Do not initiate intimate physical contact (hugs, touching a shoulder, etc.) unless the target has previously made similar contact with you.
If everyone follows this rule, nobody will ever initiate physical contact.
Slashdot has something like this.
Genuine agreement with whimsical annoyance about having to consider actual situations and connotations.
Sounds good to me if you’re going to get all connotative about it.
What about going from “members of subcategory X of category Y are more likely to possess characteristic C” to “In the absence of further information, a particular member of subcategory X is more likely to possess characteristic C than a non-X member of category Y”.
You are saying you can’t go from probabilistic information to certainty. This is a strawman.
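To make the distinction concrete, here is a toy numerical sketch of the inference above (the numbers and the individuals are made up purely for illustration):

```python
# Toy numbers, purely illustrative: members of subcategory X of category Y
# have characteristic C more often than non-X members of Y.
p_c_given_x = 0.30        # P(C | member of X)
p_c_given_not_x = 0.10    # P(C | member of Y but not X)

# With no further information about the individuals, the base rates are the
# best available estimates for particular members.
p_alice_has_c = p_c_given_x       # Alice is known only to be an X member
p_bob_has_c = p_c_given_not_x     # Bob is known only to be a non-X member of Y

print(p_alice_has_c > p_bob_has_c)   # True: "more likely"
print(p_alice_has_c == 1.0)          # False: not certainty
```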
I wrote my comment above under the assumption of mjgeddes' honesty, but I also believe they are more likely lying than not lying.
My alternative theories are: mjgeddes is just trolling without any real plan (40%); mjgeddes is planning to laugh at us all for believing something with such an explicitly low prior (40%); something else (>19%); mjgeddes actually won the lottery (<1%).
Yet still I feel the need to give them the benefit of the doubt. I wonder precisely when that social heuristic should be abandoned...
http://lesswrong.com/lw/sc/existential_angst_factory/
You could try self-modifying to not hate evil people ("hate the sin, not the sinner"). Here are some emotional arguments that might help (I make no claim as to their logical coherence):
If there was only one person in existence and they were evil, would you want them to be punished or blessed? Who would it serve to punish them?
If you are going to excuse people with mental illness you are going to have to draw some arbitrary line along the gradient from “purposely evil” to “evil because of mental illness.” Also consider the gradient of moral responsibility from child to adult.
If someone who was once evil completely reformed would you still see value in punishing them? Would you wish you hadn’t punished them while they were still evil?
Although someone may have had a guilty mind at the moment of their crime, do they still have one at the moment of punishment? What if you are increasing the quantum measure of an abstracted isomorphic experience of suffering?
(bit of irony here :P)
Perhaps acceptable casualties.
I just won the New Zealand national lottery.
Congratulations!
For the sake of people reading this post who may not be familiar with the concept of backwards causality:
As a fun test, I called on any future superintelligences to come to my aid, appealing to the notion of backward causality. Asking for clear evidence of the hand of a superintelligence in the event I won, I chose a number of high significance to me personally. The number I chose was 27, which I placed in all lines of the ticket. (All the other numbers I selected at random).
This is not the typical LW understanding of decision theory. Here’s an example of what “backwards causality” could actually mean:
mjgeddes and lottery employee both believe an agent will be created in the future that likes to grant wishes and will reward people who help grant wishes. The lottery employee somehow knows mjgeddes made a wish, and fudges the lottery results in the hope of a future reward from the wish-granting agent.
Thinking of it as “backwards causality” enacted by the hypothetical future wish-granting agent is a useful way of thinking about certain decision problems but should never preclude a normal, traditional explanation.
Lest anyone claim I am ruining the mood: Praise be to the glorious Eschaton; that acausal spring from which all blessings flow!
The use of external computation (like a human using a computer to solve a math problem or an AI expanding its computational resources) is a special case of inferring information about mathematical statements from your observations about the universe.
What is the general algorithm for accomplishing this in terms of pure observations (no action-observation cycles)? How does the difficulty of the mathematical statements you can infer to be probably true relate to the amount of computation you have expended approximating Solomonoff induction?
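Here is a minimal sketch of the special case, treating a computer's displayed output as ordinary evidence about a mathematical statement. The reliability figures are entirely hypothetical, and this obviously doesn't answer the general question:

```python
# A minimal Bayesian sketch of "using a computer is just inference from
# observations about the universe": hypothetical numbers throughout.
p_statement = 0.5            # prior that some arithmetic claim is true
p_reliable = 0.999           # prior that the calculator hardware is reliable

# If the hardware is reliable, it prints "True" exactly when the claim is true.
# If unreliable, assume (for illustration) it prints "True" half the time.
p_print_true_given_true = p_reliable * 1.0 + (1 - p_reliable) * 0.5
p_print_true_given_false = p_reliable * 0.0 + (1 - p_reliable) * 0.5

# Observation: the screen shows "True". Ordinary Bayes update:
posterior = (p_print_true_given_true * p_statement) / (
    p_print_true_given_true * p_statement
    + p_print_true_given_false * (1 - p_statement)
)
print(round(posterior, 6))   # close to 1, but still short of certainty
```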
Um, I don’t see value in excluding statements that have e.g. a forall-exists-forall-exists sequence of quantifiers at the outer level?
I just wanted to show that it seems possible to do better, and that was easier to do when considering only a subset of statements.
For lack of a better idea, I propose to use the uniform prior on it.
If we consider only universally quantified statements: Conjunctive statements in each world are redundant as the world itself is a conjunction anyway. If each world contains only disjunctive statements, shouldn’t the worlds that assign truth to longer statements be more likely, as it is easier to satisfy disjunctive statements with more terms?
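A quick brute-force check of that intuition for plain propositional disjunctions (a sketch only; it ignores quantifiers entirely): under a uniform distribution over truth assignments, a disjunction of n distinct atoms is satisfied in a 1 - 2^-n fraction of assignments, so longer disjunctions are indeed easier to satisfy.

```python
from itertools import product

def fraction_satisfying_disjunction(n_atoms: int) -> float:
    """Fraction of truth assignments over n_atoms atoms that satisfy
    the disjunction (a1 or a2 or ... or an)."""
    assignments = list(product([False, True], repeat=n_atoms))
    satisfied = sum(any(a) for a in assignments)
    return satisfied / len(assignments)

for n in (1, 2, 3, 4):
    print(n, fraction_satisfying_disjunction(n))   # 0.5, 0.75, 0.875, 0.9375 = 1 - 2**-n
```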
We interpret elements X of Pow(S) as “logical worlds”, in which the sentences in X are true and the sentences in (Pow(S) \ X) are false.
Shouldn’t this be (S \ X) instead of (Pow(S) \ X)?
Firstly, it is impossible to assign a numerical utility to each action. That is just not understanding the human brain.
Of course, the brain isn't perfect. The fact that humans can't always, or even usually, apply truths doesn't make them untrue.
Secondly, it is impossible to sum up utilities; give me an example where summing different people's utilities makes any sense.
Pressing a button kills one person; not pressing the button kills two people. Since each death carries negative utility, utility(1 death) + utility(1 death) < utility(1 death).
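A minimal numerical sketch of that comparison (the per-death utility value is made up; any negative number works):

```python
# Minimal sketch of summing utilities across people in the button example.
utility_of_one_death = -1.0              # made-up negative utility per death

press = utility_of_one_death             # pressing kills one person
dont_press = utility_of_one_death * 2    # not pressing kills two people

print(dont_press < press)                # True: two deaths sum to a worse outcome
```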
Thirdly, it regards the action as a one-time action, but it isn't. If you teach people to push the fat guy, you won't just have three fewer people dead. You'll also have a bunch of emotionless people who think it is OK to kill people if it is for the greater good.
Assuming it's bad to teach consequentialism to people doesn't make consequentialism wrong. It's bad to teach people how to make bombs, but that doesn't mean the knowledge of how to make bombs is incorrect. See Ethical Injunctions.
Fourthly, people don't always come to the truth immediately. You can't say you should kill the fat guy if you really think that's going to save the other people.
Such thought experiments often make unlikely assumptions such as perfect knowledge of consequences. That doesn't make the conclusions of those thought experiments wrong; it just constrains them to unlikely situations.
Fifthly, if utility is not quantitative, the logic of morality can’t be a computation.
Qualitative analysis is still computable. If humans can do something, it is computable.
The discovery of reality might be a calculation, because you go outside to see.
Solomonoff induction is a formalized model of prediction of future events.
If a choice is hard, that means the alternatives are roughly equally balanced, right?
Subjectively, sure. But isn’t the important factor the expected marginal utility of further analysis?
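One way to make that concrete (a toy sketch with entirely hypothetical numbers): compare the chance that further thought flips a nearly-tied decision, and what the flip would gain, against the cost of the deliberation itself.

```python
# Toy sketch: is more deliberation worth it when a choice feels "hard"?
# All numbers are hypothetical.
eu_option_a = 10.0
eu_option_b = 9.8                 # nearly tied, so the choice feels hard

p_flip = 0.3                      # chance further analysis shows B is actually better
gain_if_flip = 1.5                # how much better B turns out to be in that case
cost_of_analysis = 0.1            # time and effort spent deliberating

# Without analysis: just take A. With analysis: pay the cost, and switch to B
# in the cases where analysis reveals B is better.
eu_no_analysis = eu_option_a
eu_with_analysis = (
    (1 - p_flip) * eu_option_a
    + p_flip * (eu_option_b + gain_if_flip)
    - cost_of_analysis
)

print(eu_with_analysis > eu_no_analysis)   # True here: deliberation has positive expected value
```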
For any broken halting oracle there exists a broken halting oracle detector that identifies it, and for any broken halting oracle detector there exists a broken halting oracle that escapes detection.
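A toy illustration of the two directions (not a proof; the dicts, KNOWN_BEHAVIOR, and helper names are hypothetical stand-ins for real programs and oracles): a specific broken oracle can always be caught by a detector that hardcodes one of its mistakes, while a detector that only ever checks a fixed finite set of inputs can be dodged by an oracle that is correct exactly there.

```python
# "Oracles" here are just dicts from program-id -> claimed halting answer.

# Ground truth for a tiny universe of programs (in general this is uncomputable).
KNOWN_BEHAVIOR = {"p0": True, "p1": False, "p2": True, "p3": False}

def make_detector_for(broken_oracle):
    """Given one particular broken oracle, find an input where it is wrong
    and return a detector that checks exactly that input."""
    bad = next(p for p, ans in KNOWN_BEHAVIOR.items() if broken_oracle[p] != ans)
    return lambda oracle: oracle[bad] != KNOWN_BEHAVIOR[bad]   # True = "broken"

def fool_detector(test_set):
    """Given the finite set of programs a detector actually checks,
    build an oracle that is right on those and wrong elsewhere."""
    oracle = dict(KNOWN_BEHAVIOR)                    # correct on the test set
    untested = next(p for p in KNOWN_BEHAVIOR if p not in test_set)
    oracle[untested] = not KNOWN_BEHAVIOR[untested]  # broken, but undetectably so
    return oracle

# Direction 1: a specific broken oracle is always catchable.
broken = {"p0": True, "p1": True, "p2": True, "p3": False}   # wrong on p1
detector = make_detector_for(broken)
assert detector(broken) is True

# Direction 2: a detector that only checks {"p0", "p1"} can be fooled.
sneaky = fool_detector({"p0", "p1"})
simple_detector = lambda o: any(o[p] != KNOWN_BEHAVIOR[p] for p in ("p0", "p1"))
assert simple_detector(sneaky) is False              # escapes detection
```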
But that can be entirely reduced to the goodness or badness of consequences.
Epiphanies are not necessarily useless or wrong, but cannot effect anything unless they are part of a system created already in motion. You can no more apply an epiphany to an unfocused, inactive human than you can apply it to a rock.
On the face of it I would absolutely disagree with Wilde on that: to live a moral life one absolutely needs to distinguish between good and bad.
But is it necessary to divide people into good and bad? What if you were only to apply goodness and badness to consequences and to your own actions?
The Moral Void, Eliezer Yudkowsky