Interested in big-picture considerations and thoughtful action.
ektimo
I donated. I think Lightcone is helping strike at the heart of questions about what we should believe and do. Thank you for making LessWrong work so well, for managing content thoughtfully, and for providing high-quality spaces, both online and offline, for deep ideas to develop and spread!
What is your tax ID for people wanting to donate from a Donor Advised Fund (DAF) to avoid taxes on capital gains?
Cool. Is this right? For something with a 1/n chance of success, I can have a 95% chance of at least one success by making 3n attempts, for large values of n. Roughly what does "large" mean here?
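For what it's worth, the claim can be checked numerically: the failure probability after 3n independent attempts is (1 - 1/n)^(3n), which tends to e^(-3) ≈ 0.0498 as n grows, giving about a 95% success chance. A quick sketch (assuming the attempts are independent):

```python
import math

# Chance of at least one success in 3n independent attempts,
# each succeeding with probability 1/n:
#   1 - (1 - 1/n)^(3n)  ->  1 - e^(-3) ~ 0.9502 as n grows
for n in (10, 100, 1000, 10**6):
    print(n, round(1 - (1 - 1/n) ** (3 * n), 4))

print("limit", round(1 - math.exp(-3), 4))
```

Even n = 10 is already within about a percentage point of the limit, so "large" kicks in quickly.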
A small improvement to the Wikipedia page on Pareto Efficiency
I’m confused by what you mean by “non-pragmatic”. For example, what makes “avoiding dominated strategies” pragmatic but “deference” non-pragmatic?
(It seems like the pragmatic ones help you decide what to do and the non-pragmatic ones help you decide what to believe, but then this doesn’t answer how to make good decisions.)
I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then I don't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)
We can be virtually certain that 2+2=4 based on priors. This is because it's true in the vast multitude of universes; in fact, in all the universes except the one universe that contains all the other universes. And I'm pretty sure that one doesn't exist anyway.
Code here.
The link to code isn’t working for me. (Update: Worked on Safari but not Chrome)
How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?
(There could be tweaks, like redistributing the margin of victory if people avoid spending on winners, or redistributing tokens from losing items if people avoid spending on losers, etc., but I'm not sure how much that would happen. The more interesting question may be how it influences everyone's sense of what they are doing.)
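As a minimal sketch of how the basic mechanism might be tallied (the item names, ballots, and the signed-token convention are all made up for illustration, not part of the proposal above):

```python
BUDGET = 1000  # tokens per voter, as in the proposal

def tally(ballots):
    """ballots: list of dicts mapping ballot item -> signed tokens
    (positive = support, negative = oppose). Returns net tokens per item."""
    totals = {}
    for ballot in ballots:
        # each voter may spend at most their full budget across all items
        assert sum(abs(t) for t in ballot.values()) <= BUDGET
        for item, tokens in ballot.items():
            totals[item] = totals.get(item, 0) + tokens
    return totals

# illustrative ballots: voters concentrate tokens on what they care about
ballots = [
    {"parks": 800, "roads": -200},
    {"parks": -100, "roads": 500, "tax_cut": 400},
    {"tax_cut": -1000},
]
print(tally(ballots))  # parks and roads net positive, tax_cut net negative
```

The interesting design questions (like the redistribution tweaks) would layer on top of a tally like this.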
Thanks for your reply! Yes, I meant identical as in atoms not as in “human twin”. I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.
Should you cooperate with your almost identical twin in the prisoner’s dilemma?
The question isn't how physically similar they are; it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how much I favor us both choosing to cooperate over any of the other outcomes. So even if my almost-identical twin doesn't favor it quite as much, I can predict they will still choose to cooperate given how much I favor it (and, more so, that they will also approach the problem this same way; if I think they'll think "ha, this sounds like somebody I can take advantage of" or "reason dictates I must defect", then I wouldn't cooperate with them).
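The margin point can be made concrete with a back-of-the-envelope expected-value check. Suppose my twin ends up making the same choice I do with probability p. Using the standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0; these numbers are illustrative assumptions, not from the comment above), cooperating wins whenever p exceeds a threshold:

```python
# Illustrative payoffs: T = temptation, R = mutual cooperation,
# P = mutual defection, S = sucker's payoff
T, R, P, S = 5, 3, 1, 0

def ev_cooperate(p):
    # twin mirrors my choice with probability p, defects otherwise
    return p * R + (1 - p) * S

def ev_defect(p):
    # twin mirrors my choice with probability p, cooperates otherwise
    return p * P + (1 - p) * T

# Cooperate beats defect when p*R + (1-p)*S > p*P + (1-p)*T,
# i.e. when p > (T - S) / ((T - S) + (R - P))
threshold = (T - S) / ((T - S) + (R - P))
print(threshold)  # 5/7, about 0.714 with these payoffs
```

So with these numbers, a twin who mirrors my reasoning even ~72% of the time already makes cooperation the better bet, which is the "margin" doing the work.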
A key question is how prosaic AI systems can be designed to satisfy the conditions under which the PMM is guaranteed (e.g., via implementing surrogate goals).
Is something like surrogate goals needed for this to work, such that the agent would need to maintain a substituted goal? (I don't currently fully understand the proposal, but my sense was that the goal of renegotiation programs is to not require this.)
Thank you @GideonF for taking the time to post this! This deserved to be said and you said it well.
we should pick a set of words and phrases and explanations. Choose things that are totally fine to say, here I picked the words Shibboleth (because it’s fun and Kabbalistic to be trying to get the AI to say Shibboleth) and Bamboozle
Do you trust companies to not just add a patch?
final_response.replace('bamboozle', 'trick')
I suspect they’re already doing this kind of thing and will continue to as long as we’re playing the game we’re playing now.
Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant. (I could try to define "instant", and the number of instants would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs of the components, all the combinations of memory bits, and one "stream of consciousness" quantum.)
If you could make a change would you prioritize:
1. Pruning the instances to reduce negative experiences
2. Being able to press the button lots of times
3. Making the experiences more real (For example, an experience could be "one instant of reminiscing over my memories of building a Dyson Sphere" when nothing like that ever happened. One way to make it more real would be to create the set of all the universe starting conditions needed to generate the set of all unique experiences; each universe will create duplicate experiences among its various inhabitants, but it will contain at least the one unique experience it is checking off, which would include the person reminiscing over building a Dyson Sphere who actually did build it. Or at least the experiences that can be generated in this fashion.)
4. This is horrible; stop the train, I want to get off.
(I’d probably go with 4 but curious if people have different opinions.)
ektimo’s Shortform
I have enough mana to create a market. (It looks like each one costs about 1000, and I have about 3000.)
1. Is Manifold the best market to be posting this on, given that it's fake money and may be biased based on its popularity among LessWrong users, etc.?
2. I don't know what question(s) to ask. My understanding is that there are some shorter-term predictions that could be made (related to shorter-term goals) and longer-term predictions, so I think there should be at least 2 markets?
[Question] Who does the artwork for LessWrong?
On behalf of humanity, thank you.
This seems like 2 questions:
1. Can you make up mathematical counterfactuals and propagate the counterfactual to unrelated propositions? (I'd guess no. If you're just breaking a conclusion somewhere, you can't propagate it following any rules unless you specify what those rules are, in which case you've just made up a different mathematical system.)
2. Does the identical-twin one-shot prisoner's dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this? (I'm interested in this one also.)