testing \latex \LaTeX
does anyone know how to label equations and reference them?
@max-kaye u/max-kaye https://www.lesswrong.com/users/max-kaye
If it’s possible to use decision theory in a deterministic universe, then MWI doesn’t make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can’t freely decide to use it to make a better decision than the one you would have made anyway.
[...]
Deterministic physics excludes free choice. Physics doesn’t.
MWI is deterministic over the multiverse, not per-universe.
A combination where both are fine or equally predicted fails to be a hypothesis.
Why? If I have two independent actions—flipping a coin and rolling a 6-sided die (d6) - am I not able to combine “the coin lands heads 50% of the time” and “the die lands even (i.e. 2, 4, or 6) 50% of the time”?
If you have partial predictions of X1XX0X and XX11XX you can “or” them into X1110X.
This is (very close to) a binary “or”, I roughly agree with you.
But if you try to combine 01000 and 00010 the result will not be 01010 but something like 0X0X0.
This is sort of like a binary “and”. Have the rules changed? And what are they now?
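To check I’m reading your examples right, here’s a quick Python sketch of the two rules as I currently understand them (the function names are just mine, for illustration): the first fills in any position that either partial prediction pins down; the second keeps only the positions where two complete predictions agree and wildcards the rest.

```python
def combine_partial(a: str, b: str) -> str:
    """First rule (as I read it): fill in every position that either
    partial prediction specifies; 'X' means "says nothing here"."""
    out = []
    for x, y in zip(a, b):
        if x == 'X':
            out.append(y)
        elif y == 'X' or x == y:
            out.append(x)
        else:
            raise ValueError("the two partial predictions conflict at this position")
    return ''.join(out)


def combine_alternatives(a: str, b: str) -> str:
    """Second rule (as I read it): keep a position only where the two
    predictions agree; wildcard it where they disagree."""
    return ''.join(x if x == y else 'X' for x, y in zip(a, b))


print(combine_partial("X1XX0X", "XX11XX"))     # -> X1110X
print(combine_alternatives("01000", "00010"))  # -> 0X0X0
```

Is that roughly what you mean, or are the rules something else?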
So there is a relationship between the Miller and Popper paper’s conclusions and its assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn’t depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument … the criticism that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.
I didn’t claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread, I think there’s been some significant miscommunications. If something’s unclear to you, you can quote it to point it out.
[1]: for clarity, the argument in Q: A proof of the impossibility of inductive probability.
> Reality does not contradict itself
Firstly, epistemology goes first. You don’t know anything about reality without having the means to acquire knowledge.
Inductivism is not compatible with this—it has no way to bootstrap except by some other, more foundational epistemic factors.
Also, you didn’t really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example when you asked for examples). You said contradictions being bad is an assumption. I said no, it’s a conclusion, and offered an explanation (which you’ve ignored). In fact, through this discussion you haven’t—as far as I can see—actually been interested in figuring out a) what anyone else thinks or b) where and what you might be wrong about.
Secondly, I didn’t say the PNC was actually false.
I don’t think there’s any point talking about this, then. We haven’t had any meaningful discussion about it and I don’t see why we would.
> Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn’t listen to him!
I don’t think you’re taking this seriously.
CR doesn’t have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones, the kind that simple organisms and algorithms can do.
This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...
The refutations of that kind of induction are way beyond the bounds of CR.
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.
Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it’s an explanatory conclusion.
I’m not convinced we can get anywhere productive continuing this discussion. If you don’t think contradictions are bad, it feels like there’s going to be a lot of work finding common ground.
But I don’t count it as an example, since I don’t regard it as correct [...]
This is irrational. Examples of relationships do not depend on whether the example is real or not. All that’s required is that the relationship is clear; whether each of us judges the idea itself as true or not doesn’t matter in this case. We don’t need to argue this point anyway, since you provided an example:
In particular, it is based on bivalent logic, where 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So “induction must be based on bivalent logic” is an assumption.
Cool, so do you see how the argument you made is separate from whether inductivism is right or not?
Your argument proposes a criticism of Popper’s argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn’t rely on whether inductivism is right or not, just whether it’s consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn’t mean that CR is wrong, or that Inductivism is wrong; it just means Popper’s criticism was wrong.
Curiously, you say:
But I don’t count it as an example, since I don’t regard it as correct,
Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?
The concept of indexical uncertainty we’re interested in is… I think… uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one.
I’m not sure I understand yet, but does the following line up with how you’re using the word?
Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (this could be your mind or another person’s mind)
Notes:
- “exact” might be too strong a word
- I added “or temporal location of such matter” to cover the sleeping beauty case (which, btw, I’m apparently a halfer or double halfer according to Wikipedia’s classifications, but haven’t thought much about it)
Edit/PS: I think my counter-example with Alice, Alex, and Bob still works with this definition.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!
Sure, or the parties need a rational method of resolving a disagreement on acceptability. I’m not sure why that’s particularly relevant, though.
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don’t see how that is an example, principally because it seems wrong to me.
You didn’t quote an example—I’m unsure if you meant to quote a different part?
In any case, what you’ve quoted isn’t an example, and you don’t explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it’s not soluble with other methods?
I’m also not sure why this is particularly relevant.
Are we still talking about the below?
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
I did give you an example (one of Popper’s arguments against inductivism).
A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person’s criticism does not necessarily relate to that person’s epistemology, and vice versa.
Here are some (mostly critical) notes I made reading the post. Hope it helps you figure things out.
> * If you can’t say something nice, don’t say anything at all.
This is such bad advice (I realise eapache is giving it as an example of common moral sayings; though I note it’s neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband’s funeral?
“You’re going too fast”, “you’re hurting me”, “your habit of overreaching hurts your ability to learn”, etc. These are good things to say in the right context, and not saying them allows bad things to keep happening.
> The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.
I don’t know why this is in here, particularly the second clause—I’m not sure it helps with anything. It’s also mean.
> This combination of personality traits makes …
The last thing you talk about is what Peterson might say, not your own personality. Sounds like you’re talking about the personality trait(s) of “[having] something weird to say about lobsters and serotonin”.
> This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour.
I presume you mean “guiding” more than “defining”. It could define standards you hold for your own behaviour.
> *[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess … to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]*
This is problematic, e.g. ‘I will judge you for being an ineffective, inconsistent nazi, but never for holding or not holding nazi values’. Making moral judgements is important. That said, judging things poorly is (possibly very) harmful. (Examples: treating all moral inconsistencies as equally bad, or treating some racism as acceptable b/c of the target race)
> annoyingly large number of people
I think it’s annoyingly few. A greater population is generally good.
> There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble”
How do you know you aren’t just friends with people who approve of this?
What do you do WRT everyone else? (e.g. shop-keeps, the mailman, taxi drivers)
> If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.
Are you using Postel’s Principle *solely* for approval? (You say “The more people who like me, the more secure my situation” earlier, but is there another reason?)
> Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism.
How can formal definitions be implicit?
Which aphorism? You provided 5 things you called aphorisms, but you haven’t called Postel’s Principle that.
> … [within the context of your own behaviour] something is only morally permissible for me if it is permissible for *all* of the people I am likely to interact with regularly.
What about people you are friends with for particular purposes? Example: a friend you play tennis with but wouldn’t introduce to your parents.
What if one of those people decides that Postel’s Principle is not morally permissible?
> … [within the context of other ppl’s behaviour] it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.
You’re basing your idea on which things are generally morally permissible on what other people think. (Note: you do acknowledge this later which is good)
This cannot deal with contradictions between people’s moral views (a case where neither of those people necessarily have contradictions, but you do).
It also isn’t an idea that works in isolation. Other people might have moral views b/c they have principles from which they derive those views. They could be mistaken about the principles or their application. In such a case would you—even if you realised they were mistaken—still hold their views as permissible? How is that rational?
> Since the set of actions that are considered morally permissible for me are defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle.
This is a moral choice, by what moral knowledge can you make such a choice? I presume you see how using Postel’s Principle here might lead you into a recursive trap (like an echo-chamber), and how it limits your ability to error correct if something goes wrong. Ultimately you’re not in control of what your social circle becomes (or who’s in and who’s out).
> It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing.
What? Why?
Your use of ‘untenable’ is unclear; is it just impractical but something you’d do if it were practical, or is it unthinkable to do so, or is it just so difficult it would never happen? (Note: I think option 3 is not true, btw)
> (since inaction is of course its own kind of action)
It’s good you realise this.
> In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.
I can see the logic of why you’d *want* to do this, but I can’t see *how* you’d do it. Also, I don’t see why you’d care to if it wasn’t causing problems. I have friends and associates I value whom I’d have to cut loose if I were to follow Postel’s P. That would harm me, so how could it be moral to do so?
It would harm you too, unless the friends are a) collectively and individually not very meaningful (but then why be friends at all?) or b) not providing value to your life anyway (so why be friends at all?). Maybe there are other options?
> Unfortunately, it sometimes happens that people change their moral stances, …
Why is this a bad thing!??!? It’s **good** to learn you were wrong and improve your values to reflect that.
You expand the above with “especially when under pressure from other people who I may not be interacting with directly”—I’d argue that’s not *necessarily* changing one’s preference, it’s just that the person is behaving like that to please someone else. Hard to see why that would matter unless it was like within the friend group itself or impacted the person so much that they couldn’t spend time with you (the second example being something that happens alongside moral pressure with e.g. domestic abuse, so might be something to seriously consider).
> tomorrow one of my friends could decide they’re suddenly a radical Islamist and force me with a choice.
You bring up a decent problem with your philosophy, but then say:
> While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.
First, “many” is not “all”, so you still have undefined behaviour (like what to do in these situations). Second, who cares if you have an interest in radical Islam? A friend of yours suddenly began adhering to a pro-violence, anti-reason philosophy. I don’t think you need Postel’s P. to know you don’t want to casually hang with them again.
So I think this is a bad example for two reasons:
1. You dismiss the problem because “many of these choices end up being rather easy”, but that’s a bad reason to dismiss it, and I really hope many of those choices are not because a friend has recently decided terrorism might be a good hobby.
2. If you do it just b/c you don’t have an interest, that doesn’t cover all cases; more importantly, to do so for that reason is to reject deeper moral explanations. How do you know you’re “on the right side of history” if you can’t judge it and refuse the available moral knowledge we have?
> Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. … This sort of situation also forces me with a choice, and often a much more difficult one. … If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.
I agree that you shouldn’t take your friends’ moral conclusions into account when thinking about big societal stuff. But the thing about the “right side of history” is that you can’t predict it. Take the US Civil War: with your Postel’s-P.-inspired morality, your judgements would depend on which state you were in. In the lead-up, you’d probably have judged the dominant local view to be the one that would endure. If you didn’t judge the situation like that, it means you would have used some other moral knowledge that isn’t part of Postel’s P.
> However what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.
I agree, that sounds like a very uncomfortable situation.
> Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.
Why is this not by design? I think it’s natural for ppl to mostly agree with their friend group on particular moral judgements (moral explanations can be a whole different ball game). I don’t think Postel’s P. need be involved.
Additionally: social dynamics are such that a group can be very *restrictive* in regards to what’s acceptable, and often treat harshly those members who are too liberal in what they accept. (Think Catholics in like the 1600s or w/e)
----
I think the programmingisterrible post is good.
> If some data means two different things to different parts of your program or network, it can be exploited—Interoperability is achieved at the expense of security.
Is something like *moral security* important to you? Maybe it’s moot because you don’t have anyone trying to maliciously manipulate you, but worth thinking about if you hold the keys to any accounts, servers, etc.
> The paper, and other work and talks from the LANGSEC group, outlines a manifesto for language based security—be precise and consistent in the face of ambiguity
Here tef (the author) points out that preciseness and consistency (e.g. having and adhering to well formed specs) are a way to avoid the bad things about Postel’s P. Do you agree with this? Are your own moral views “precise and consistent”?
> Instead of just specifying a grammar, you should specify a parsing algorithm, including the error correction behaviour (if any).
This is good, and I think applies to morality: you should be able to handle any moral situation, know the “why” behind any decision you make, and know how you avoid errors in moral judgements/reasoning.
Note: “any moral situation” is fine for me to say here b/c “don’t make judgements on extreme or wacky moral hypotheticals” can be part of your moral knowledge.
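As a toy illustration of tef’s point (my own example, not from the post, and the function names are made up): the “liberal” version below doesn’t silently accept garbage, it has a specified error-correction behaviour, so everyone knows exactly what happens to malformed input.

```python
def parse_int_list_strict(text: str) -> list[int]:
    """Be precise and consistent: reject any malformed entry outright."""
    return [int(tok) for tok in text.split(",")]


def parse_int_list_lenient(text: str) -> list[int]:
    """Liberal, but with the error correction spelled out in the spec:
    malformed entries are skipped and reported, never guessed at."""
    values, errors = [], []
    for tok in text.split(","):
        try:
            values.append(int(tok))
        except ValueError:
            errors.append(tok.strip())
    if errors:
        print(f"ignored malformed entries: {errors}")
    return values


print(parse_int_list_strict("1, 2, 4"))         # [1, 2, 4]
print(parse_int_list_lenient("1, 2, oops, 4"))  # reports 'oops', returns [1, 2, 4]
```

The moral analogue, I take it, is your point above: know in advance how you’d handle the cases your principles don’t cleanly cover, rather than improvising each time.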
Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia
I don’t think so, here is a counter-example:
Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn’t know about the twin and thinks he’s talking to Alex. Bob asks: “How are you today?”. Before Alice responds, Alex walks in.
Bob’s observation of Alex will surprise him, and he’ll quickly figure out that something’s going on. But more importantly: Bob’s observation of Alex alters the indexical ‘you’ in “How are you today?” (at least compared to Bob’s intent, and it might change for Alice if she realises Bob was mistaken, too).
I don’t think this is anything close to describing qualia. The experience of surprise can be a quale, the feeling of discovering something can be a quale (eureka moments), the experience of the colour blue is a quale, but the observation of Alex is not.
Do you agree with this? (It’s from https://plato.stanford.edu/entries/indexicals/)
An indexical is, roughly speaking, a linguistic expression whose reference can shift from context to context. For example, the indexical ‘you’ may refer to one person in one context and to another person in another context.
Btw, ‘qualia’ is the plural form of ‘quale’
It’s a bad thing if ideas can’t be criticised at all, but it’s also a bad thing if the relationship of mutual criticism is cyclic, if it doesn’t have an obvious foundation or crux.
Do you have an example? I can’t think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they’re contrived)
> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?
I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn’t necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.
(Note: I don’t think Popper was wrong but I’m also not sure it’s necessary to discuss that now if we disagree; just wanted to mention)
> And it’s always possible both are wrong, anyway
Kind of, but “everything is wrong” is vulgar scepticism.
I’m not suggesting anything I said was a reason to think both theories wrong; I listed it because it was a possibility I didn’t mention in the other paragraphs, and it’s a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong, and we can’t answer that criticism, then we don’t have to worry about them anymore)
FYI this usage of the term *universality* overloads a sorta-similar concept David Deutsch (DD) uses in *The Fabric of Reality* (1997) and in *The Beginning of Infinity* (2011). In BoI it’s the subject of Chapter 6 (titled “The Jump to Universality”); I don’t know what history the idea has prior to that. Some extracts are below to give you a bit of an idea of how DD uses the word ‘universality’.
Part of the reason I mention this is the reference to Popper; DD is one of the greatest living Popperians and has made significant contributions to critical rationalism. I’d consider it somewhat odd if the author(s) (Paul Christiano?) weren’t familiar with DD’s work given the overlap. DD devotes a fair proportion of BoI to discussing universality and AGI directly—both separately and together—and another fair proportion of the book to foundations for that discussion.
---
General comment: Your/this use of ‘universality’ (particularly in AN #81) feels a lot like the idea of *totality* (as in a total function, etc.); it reads more like omniscience than an important epistemic property.
---
Some extracts from BoI on universality (with a particular but not exclusive focus on computation):
Here is an even more speculative possibility. The largest benefits of any universality, beyond whatever parochial problem it is intended to solve, come from its being useful for further innovation. And innovation is unpredictable. So, to appreciate universality at the time of its discovery, one must either value abstract knowledge for its own sake or expect it to yield unforeseeable benefits. In a society that rarely experienced change, both those attitudes would be quite unnatural. But that was reversed with the Enlightenment, whose quintessential idea is, as I have said, that progress is both desirable and attainable. And so, therefore, is universality.
---
Babbage originally had no conception of computational universality. Nevertheless, the Difference Engine already comes remarkably close to it – not in its repertoire of computations, but in its physical constitution. To program it to print out a given table, one initializes certain cogs. Babbage eventually realized that this programming phase could itself be automated: the settings could be prepared on punched cards like Jacquard’s, and transferred mechanically into the cogs. This would not only remove the main remaining source of error, but also increase the machine’s repertoire. Babbage then realized that if the machine could also punch new cards for its own later use, and could control which punched card it would read next (say, by choosing from a stack of them, depending on the position of its cogs), then something qualitatively new would happen: the jump to universality. Babbage called this improved machine the Analytical Engine. He and his colleague the mathematician Ada, Countess of Lovelace, knew that it would be capable of computing anything that human ‘computers’ could, and that this included more than just arithmetic: it could do algebra, play chess, compose music, process images and so on. It would be what is today called a universal classical computer. (I shall explain the significance of the proviso ‘classical’ in Chapter 11, when I discuss quantum computers, which operate at a still higher level of universality.)
---
The mathematician and computer pioneer Alan Turing later called this mistake ‘Lady Lovelace’s objection’. It was not computational universality that Lovelace failed to appreciate, but the universality of the laws of physics. Science at the time had almost no knowledge of the physics of the brain. Also, Darwin’s theory of evolution had not yet been published, and supernatural accounts of the nature of human beings were still prevalent. Today there is less mitigation for the minority of scientists and philosophers who still believe that AI is unattainable. For instance, the philosopher John Searle has placed the AI project in the following historical perspective: for centuries, some people have tried to explain the mind in mechanical terms, using similes and metaphors based on the most complex machines of the day. First the brain was supposed to be like an immensely complicated set of gears and levers. Then it was hydraulic pipes, then steam engines, then telephone exchanges – and, now that computers are our most impressive technology, brains are said to be computers. But this is still no more than a metaphor, says Searle, and there is no more reason to expect the brain to be a computer than a steam engine.
But there is. A steam engine is not a universal simulator. But a computer is, so expecting it to be able to do whatever neurons can is not a metaphor: it is a known and proven property of the laws of physics as best we know them. (And, as it happens, hydraulic pipes could also be made into a universal classical computer, and so could gears and levers, as Babbage showed.)
---
Because of the necessity for error-correction, all jumps to universality occur in digital systems. It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue. It would not be possible to repeat, nor even to remember, what anyone had said. Nor, therefore, does it matter that universal writing systems cannot perfectly represent analogue information such as tones of voice. Nothing can represent those perfectly. For the same reason, the sounds themselves can represent only a finite number of possible meanings. For example, humans can distinguish between only about seven different sound volumes. This is roughly reflected in standard musical notation, which has approximately seven different symbols for loudness (such as p, mf, f, and so on). And, for the same reason, speakers can only intend a finite number of possible meanings with each utterance. Another striking connection between all those diverse jumps to universality is that they all happened on Earth. In fact all known jumps to universality happened under the auspices of human beings – except one, which I have not mentioned yet, and from which all the others, historically, emerged. It happened during the early evolution of life.
This is commentary I started making as I was reading the first quote. I think some bits of the post are a bit vague or confusing, but I think I get what you mean by anthropic measure, so it’s okay in service of that. I don’t think equating anthropic measure to mass makes sense, though; counter-examples seem trivial.
> The two instances can make the decision together on equal footing, taking on exactly the same amount of risk, each- having memories of being on the right side of the mirror many times before, and no memories of being on the wrong- tacitly feeling that they will go on to live a long and happy life.
feels a bit like quantum suicide.
note: having no memories of being on the wrong side does not make this any more pleasant an experience to go through, nor does it provide any reassurance against being the replica (presuming that’s the one which is killed).
> As is custom, the loser speaks first.
naming the characters Paper and Scissors is a neat idea.
> Paper wonders, what does it feel like to be… more? If there were two of you, rather than just one, wouldn’t that mean something? What if there were another, but it were different?… so that-
>
> [...]
>
> Scissors: “What would it feel like? To be… More?… What if there were two of you, and one of me? Would you know?”
isn’t paper thinking in 2nd person but then scissors in 1st? so paper is thinking about 2 ppl but scissors about 3 ppl?
> It was true. The build that plays host to the replica (provisionally named “Wisp-Complete”), unlike the original’s own build, is effectively three brains interleaved
Wait, does this now mean there’s 4 ppl? 3 in the replica and 1 in the non-replica?
> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.
Um okay, wouldn’t they have maybe thought about this after 15 years of training and decades of practice in the field?
> It is no longer rational for a selfish agent in the position of either Paper nor Scissors to consent to the execution of the replica, because it is more likely than not, from either agent’s perspective, that they are the replica.
I’m not sure this follows in our universe (presuming it is rational when it’s like 1:1 instead of 3:1 or whatever). like I think it might take different rules of rationality or epistemology or something.
> Our consenters have had many many decades to come to terms with these sorts of situations.
Why are Paper and Scissors so hesitant then?
> That gives any randomly selected agent that has observed that it is in the mirror chamber a 3⁄4 majority probability of being the replica, rather than being the original.
I don’t think we’ve established sufficiently that the 3 minds 1 brain thing are actually 3 minds. I don’t think they qualify for that, yet.
> But aren’t our consenters perfectly willing to take on a hefty risk death in service of progress? No. Most Consenters aren’t. Selling one’s mind and right to life in exchange for capital would be illegal.
Why would it be a hefty risk? Isn’t it 0% chance of death? (the replica is always the one killed)
> In a normal mirror chamber setup, when the original enters the mirror chamber, they are confident that it is the original who will walk out again. They are taking on no personal risk. None is expected, and none is required.
Okay we might be getting some answers soon.
> The obvious ways of defecting from protocol- an abdication of the responsibility of the consenter, a refusal to self-murder, an attempt to replicate without a replication license- are taken as nothing less than Carcony.
Holy shit this society is dystopic.
> It would be punished with the deaths of both copies and any ancestors of less than 10 years of divergence or equivalent.
O.O
> But if, somehow, the original were killed? What if neither instance of the Consenter signed for their replica’s execution, and the replica were left alive. That would not be considered Carcony. It would not even be considered theft- because a brain always belongs to its mind.
I’m definitely unclear on the process for deciding; wouldn’t like only one guillotine be set up and both parties affixed in place? (Moreover, why wouldn’t the replica just be a brain and not in a body, so no guillotine, and just fed visual inputs along with the mirror-simulation in the actual room—sounds feasible)
> What if neither instance of the Consenter signed for their replica’s execution
Wouldn’t this be an abdication of responsibility as mentioned in the prev paragraph?
> So, do you see now? Do you see how Consenter Nai Paper-Chell-Glass-Stratton was motivated by a simple alignment of the payoff matrices?
Presumably to run away with other-nai-x3-in-a-jar-stratton?
> Paper: “You wouldn’t do that to me. Look… if you’re the original… And I do myself, and I’m the replica. I wont actually be dead, because if you destroy a representation of your brain case in a simulation that doesn’t actually destroy you in real life. I might not even desync. Whatever happens, I’ll still be alive after the penetration so I’ll know I’m the replica, but you might not know. It might look like I’m really dead. And you’ll have no incentive to follow through and do yourself at that point.”
> Scissors: “I still don’t see it.”
So both parties sign for the destruction of the replica, but only the legit Nai’s signing will actually trigger the death of the replica. The replica Nai’s signing will only SIMULATE the death of a simulated replica Nai (the “real” Nai being untouched) - though if this happened wouldn’t they ‘desync’ - like not be able to communicate? (presuming I understand your meaning of desync)
> Paper: ”… If you’re the replica, it doesn’t matter whether you do yourself, you’ll still get saved either way, but you’re incented not to do yourself because having a simulated spike stuck through your simulated head will probably be pretty uncomfortable. But also, if you’re the original, you’re sort of doomed either way, you’re just incented to run off and attempt Carcony, but there’s no way the replica would survive you doing that, and you probably wouldn’t either, you wouldn’t do that to me. Would you?”
I don’t follow the “original” reasoning; if you’re the original and you do yourself the spike goes through the replica’s head, no? So how do you do Carcony at that point?
> The test build is an order of magnitude hardier than Nai’s older Cloud-Sheet. As such, the testing armature is equipped to apply enough pressure to pierce the Cloud-Sheet’s shielding, and so it was made possible for the instances to conspire to commit to the legal murder of Consenter Nai Scissors Bridger Glass Stratton.
So piercing the shielding of the old brain (cloud-sheet) is important b/c the various Nais (ambiguous: all 4 or just 3 of them) are conspiring to murder normal-Nai and they need to pierce the cloud-sheet for that. But aren’t most new brains they test hardier than the one Nai is using? So isn’t it normal that the testing-spike could pierce her old brain?
> A few things happened in the wake of Consenter Paper Stratton’s act of praxis.
omit “act of”, sorta redundant.
> but most consenter-adjacent philosophers took the position that it was ridiculous to expect this to change the equations, that a cell with thrice the mass should be estimated to have about thrice the anthropic measure, no different.
This does not seem consistent with the universe. If that was the case then it would have been an issue going smaller and smaller to begin with, right?
Also, 3x lattices makes sense for error correction (like EC RAM), but not 3x mass.
> The consenter union banned the use of mirror chambers in any case where the reasonable scoring of the anthropic measure of the test build was higher than the reasonable scoring of a consenter’s existing build.
this presents a problem for testing better brains; curious if it’s going to be addressed.
I just noticed “Consenter Nai Paper-Chell-Glass-Stratton”—the ‘paper’ refers to the rock-paper-scissors earlier (confirmed with a later Nai reference). She’s only done this 4 times now? (this being replication or the mirror chamber)
earlier “The rational decision for a selfish agent instead becomes...” is implying the rational decision is to execute the original—presumably this is an option the consenter always has? like they get to choose which one is killed? Why would that be an option? Why not just have a single button that when they both press it, the replica dies; no choice in the matter.
> Scissors: “I still don’t see it.”
Scissors is slower so Scissors dies?
> Paper wonders, what does it feel like to be… more? If there were two of you, rather than just one, wouldn’t that mean something? What if there were another, but it were different?… so that-
I thought this was Paper thinking not wondering aloud. In that light
> Scissors: “What would it feel like? To be… More?… What if there were two of you, and one of me? Would you know?”
looks like partial mind reading or something, like super mental powers (which shouldn’t be a property of running a brain 3x over but I’m trying to find out why they concluded Scissors was the original)
> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.
At this point in the story isn’t the idea that it has a higher anthropic measure b/c it’s 3 brains interleaved, not 1? while the parenthetical bit (“its brain … massive”) isn’t a reason? (Also, the mass thing that comes in later; what if they made 3 brains interleaved with the total mass of one older brain?)
Anyway, I suspect answering these issues won’t be necessary to get an idea of anthropic measure.
(continuing on)
> Anthropic measure really was the thing that caused consenter originals to kill themselves.
I don’t think this is rational FYI
> And if that wasn’t true of our garden, we would look out along the multiverse hierarchy and we would know how we were reflected infinitely, in all variations.
> [...]
> It became about relative quantities.
You can’t take relative quantities of infinities or subsets of infinities (it’s all 100% or 0%, essentially). You can have *measures*, though. David Deutsch’s Beginning of Infinity goes into some detail about this—both generally and wrt many worlds and the multiverse.
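A toy example of the distinction (mine, not DD’s): give the $n$-th world weight $2^{-n}$. The even-indexed and odd-indexed worlds are both infinite subsets, so counting them tells you nothing, but their measures are perfectly well defined and comparable:

$$\mu(\text{odd}) = \sum_{k=0}^{\infty} 2^{-(2k+1)} = \tfrac{2}{3}, \qquad \mu(\text{even}) = \sum_{k=1}^{\infty} 2^{-2k} = \tfrac{1}{3}$$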
I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.
I think he’d say ‘yes’ to a distinction between morality and desire, at least in the way I’m reading this sentence. My comment: Moral statements are part of epistemology and not dependent on humans or local stuff. However, as one learns more about morality and considers their own actions, their preferences progressively change to be increasingly compatible with their morality.
Being a fallibilist I think he’d add something like or roughly agree with: the desire to be moral doesn’t mean all our actions become moral, we’re fallible and make mistakes, so sometimes we think we’re doing something moral that turns out not to be (at which point we have some criticism for our behaviour and ways to improve it).
(I’m hedging my statements here b/c I don’t want to put words in DD’s mouth; these are my guesses)
I prefer schools that don’t.
Wouldn’t that just be like hedonism or something like that? I’m not sure what would be better about a school that doesn’t.
But I’ve never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires, maybe they have a good and useful account of values, where they somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don’t. They seem too messy to prove correctness of.
Why is the definition of values and the addition of “moral” not enough?
Definitions (from google):
[moral] values: [moral] principles or standards of behaviour; one’s judgement of what is important in life.
principle: a fundamental truth or proposition that serves as the foundation for a system of belief or behaviour or for a chain of reasoning.
I’d argue for a slightly softer definition of principle, particularly it should account for: moral values and principles can be conclusions, they don’t have to be taken as axiomatic, however, they are *general* and apply universally (or near-universally).
They seem too messy to prove correctness of.
Sure, but we can still learn things about them, and we can still reason about whether they’re wrong or right.
Here’s a relevant extract from BoI (about 20% through the book, in ch5 - there’s a fair amount of presumed reading at this point)
In the case of moral philosophy, the empiricist and justificationist misconceptions are often expressed in the maxim that ‘you can’t derive an ought from an is’ (a paraphrase of a remark by the Enlightenment philosopher David Hume). It means that moral theories cannot be deduced from factual knowledge. This has become conventional wisdom, and has resulted in a kind of dogmatic despair about morality: ‘you can’t derive an ought from an is, therefore morality cannot be justified by reason’. That leaves only two options: either to embrace unreason or to try living without ever making a moral judgement. Both are liable to lead to morally wrong choices, just as embracing unreason or never attempting to explain the physical world leads to factually false theories (and not just ignorance).
Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not. Thus factual knowledge can be useful in criticizing moral explanations.
For example, in the nineteenth century, if an American slave had written a bestselling book, that event would not logically have ruled out the proposition ‘Negroes are intended by Providence to be slaves.’ No experience could, because that is a philosophical theory. But it might have ruined the explanation through which many people understood that proposition. And if, as a result, such people had found themselves unable to explain to their own satisfaction why it would be Providential if that author were to be forced back into slavery, then they might have questioned the account that they had formerly accepted of what a black person really is, and what a person in general is – and then a good person, a good society, and so on.
In what way are the epistemologies actually in conflict?
Well, they disagree on how to judge ideas, and why ideas are okay to treat as ‘true’ or not.
There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.
My impression is that it is more just a case of two groups of people who maybe don’t understand each other well enough, rather than a case of substantiative disagreement between the useful theories that they have, regardless of what DD thinks it is.
I have a sense of something like this, too, both in the way LW and CR “read” each other, and in the more practical sense of agreement in the outcome of many applications.
I do still think there is a substantive disagreement, though. I also think DD is one of the best thinkers wrt CR and broadly endorse ~everything in BoI (there are a few caveats: a typo and improvements to how-to-vary, at least; I’ll mention more if they come up. The yes/no stuff I mentioned in another post is an example of one of these caveats). I mention endorsing BoI b/c if you wanted to quote something from BoI it’s highly likely I wouldn’t have an issue with it (so is a good source of things for critical discussion).
Bayes does not disagree with true things, nor does it disagree with useful rules of thumb.
CR agrees here, though there is a good explanation of “rules of thumb” in BoI that covers how, when, and why rules of thumb can be dangerous and/or wrong.
Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.
This might be a good way to try to find disagreements between BE (Bayesian Epistemology) and CR in more detail. It also tests my understanding of CR (and maybe a bit of BE too).
I’ve given some details on the sorts of principles in CR in my replies^[1]. If you’d like to try this, do you have any ideas on where to go next? I’m happy to provide more detail with some prompting about the things you take issue with or think need more explanation / criticisms answered.
[1]: or, at least my sub-school of thought; some of the things I’ve said are actually controversial within CR, but I’m not sure they’ll be significant.
It’s not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.
If a theory doesn’t offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.
We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn’t make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.
And it’s always possible both are wrong, anyway.
I’m happy to do this. On the one hand I don’t like that lots of replies create more pressure to reply to everything, but I think we’ll probably be fine focusing on the stuff we find more important if we don’t mind dropping some loose ends. If they become relevant we can come back to them.
I went through the maths in the OP and it seems to check out. I think the core inconsistency is that Solomonoff Induction implies $l(X \cup Y) = l(X)$, which is obviously wrong. I’m going to redo the maths below (breaking it down step-by-step more). curi has $2l(X) = l(X)$, which is the same inconsistency given his substitution. I’m not sure we can make that substitution, but I also don’t think we need to.
Let X and Y be independent hypotheses for Solomonoff induction.
According to the prior, the non-normalized probability of $X$ (and similarly for $Y$) is:

$$P(X) = \frac{1}{2^{l(X)}} \tag{1}$$

What is the probability of $X \cup Y$?

$$\begin{aligned} P(X \cup Y) &= P(X) + P(Y) - P(X \cap Y) \\ &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)}} \cdot \frac{1}{2^{l(Y)}} \\ &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)} \cdot 2^{l(Y)}} \\ &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)+l(Y)}} \end{aligned} \tag{2}$$

However, by Equation (1) we have:

$$P(X \cup Y) = \frac{1}{2^{l(X \cup Y)}} \tag{3}$$

thus

$$\frac{1}{2^{l(X \cup Y)}} = \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)+l(Y)}} \tag{4}$$
This must hold for any and all X and Y.
curi considers the case where $X$ and $Y$ are the same length. Starting from Equation (4), we get:

$$\begin{aligned} \frac{1}{2^{l(X \cup Y)}} &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)+l(Y)}} \\ &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(X)}} - \frac{1}{2^{l(X)+l(X)}} \\ &= \frac{2}{2^{l(X)}} - \frac{1}{2^{2l(X)}} \\ &= \frac{1}{2^{l(X)-1}} - \frac{1}{2^{2l(X)}} \end{aligned} \tag{5}$$

but

$$\frac{1}{2^{l(X)-1}} \gg \frac{1}{2^{2l(X)}} \tag{6}$$

and

$$\frac{1}{2^{2l(X)}} \approx 0 \tag{7}$$

so:

$$\frac{1}{2^{l(X \cup Y)}} \simeq \frac{1}{2^{l(X)-1}} \quad\therefore\quad l(X \cup Y) \simeq l(X) - 1 \qquad \square \tag{8}$$
curi has slightly different logic and argues $l(X \cup Y) \simeq 2l(X)$, which I think is reasonable. His argument means we get $l(X) \simeq 2l(X)$. I don’t think those steps are necessary, but they are worth mentioning as a difference. I think Equation (8) is enough.
I was curious about what happens when $l(X) \neq l(Y)$. Let’s assume the following:

$$l(X) < l(Y) \quad\therefore\quad \frac{1}{2^{l(X)}} \gg \frac{1}{2^{l(Y)}} \tag{9}$$

so, from Equation (2):

$$\begin{aligned} P(X \cup Y) &= \frac{1}{2^{l(X)}} + \frac{1}{2^{l(Y)}} - \frac{1}{2^{l(X)+l(Y)}} \\ \lim_{l(Y) \to \infty} P(X \cup Y) &= \frac{1}{2^{l(X)}} + \underbrace{\frac{1}{2^{l(Y)}}}_{\to\, 0} - \underbrace{\frac{1}{2^{l(X)+l(Y)}}}_{\to\, 0} \\ \therefore\ P(X \cup Y) &\simeq \frac{1}{2^{l(X)}} \end{aligned} \tag{10}$$

by Equation (3) and Equation (10):

$$\frac{1}{2^{l(X \cup Y)}} \simeq \frac{1}{2^{l(X)}} \quad\therefore\quad l(X \cup Y) \simeq l(X) \quad\Rightarrow\quad l(Y) \simeq 0 \tag{11}$$
but Equation (9) assumed $l(X) < l(Y)$, which contradicts Equation (11): since lengths are non-negative, $l(Y) \simeq 0$ would force $l(X) < 0$.

So there’s an inconsistency regardless of whether $l(X) = l(Y)$ or not.
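If anyone wants to sanity-check the arithmetic, here’s a minimal Python sketch of the same calculation (the function name is mine, just for illustration). It plugs the unnormalised prior $P(\cdot) = 2^{-l(\cdot)}$ into the inclusion-exclusion formula from Equation (2) and reads off the length that Equation (3) would force on $X \cup Y$:

```python
import math


def implied_union_length(lx: int, ly: int) -> float:
    """Length l(X ∪ Y) implied by insisting P(X ∪ Y) = 2^-l(X ∪ Y),
    given P(X ∪ Y) = P(X) + P(Y) - P(X)P(Y) with P(.) = 2^-l(.)."""
    px, py = 2.0 ** -lx, 2.0 ** -ly
    p_union = px + py - px * py   # Equation (2), independent X and Y
    return -math.log2(p_union)    # invert Equation (3)


# Equal lengths: the implied l(X ∪ Y) sits just above l(X) - 1 (Equation (8)).
for n in (5, 10, 20):
    print(n, implied_union_length(n, n))

# Very unequal lengths: the implied l(X ∪ Y) approaches l(X) (Equation (11)),
# no matter how long Y's description is.
print(implied_union_length(10, 60))
```

Either way the implied length of the disjunction comes out no longer than $l(X)$, which is the inconsistency described above.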