If you like analytic philosophy and mechanism design, consider checking out my blog.
There’s just no way that things like this are remotely as effective as, say, GiveWell causes
Do you have any evidence for this?
and it barely even has longtermist points
Not all EAs are longtermists.
What data and model are these estimates of the causal effects of it based on?
You can find my sources in the references section. This was based on a Gallup study.
Another thing that confuses me is why socialist firms need special support and don’t naturally come to dominate the economy. You seem to attribute this to owners extracting value, but that seems short-sighted; presumably if you have an economy with a mixture of socialist and non-socialist firms, and the socialist firms are much more productive, they would grow quicker and become dominant over time.
I explained this in this section:
One issue that arises with starting a socialist firm is acquiring initial investment.[27] This is probably because co-ops want to maximize income (wages), not profits. They pursue the interests of their members rather than investors and may sometimes opt to increase wages instead of profits. Capitalist firms, on the other hand, are explicitly investor-owned, so investor interests will take priority.
A socialist firm can be more productive and still not dominate the economy if it’s hard to start one.
I have a mnemonic device for checking whether a model is Gears-like or not.
G E A R S:
Does a variable Generate Empirical Anticipations?
Can a variable be Rederived?
Is a variable hard to Substitute?
There’s evidence in the form of observations of events outside the Cartesian boundary. There’s evidence in the internal process of reasoning, whose nature depends on the mind.
My previous comment said:
both empirical and tautological evidence
With “empirical evidence” I meant “evidence in the form of observations of events outside the Cartesian boundary” and with “tautological evidence” I meant “evidence in the internal process of reasoning, whose nature depends on the mind”.
When doing math, evidence comes up more as a guide to intuition than anything explicitly considered. There are also metamathematical notions of evidence, rendering something evidence-like clear.
Yes, but they are both “information that indicates whether a belief is more or less valid”. Mathematical proof is also evidence, so they have the same structure. Do you have a way to ground them? Or if you somehow have a way to ground one form of proof but not the other, could you share just the one? (Since the structure is the same I suspect that the grounding of one could also be applied to the other)
I meant both empirical and tautological evidence, so general information that indicates whether a belief is more or less valid. When you say that you can keep track of truth, why do you believe you can? What is that truth based on, evidence?
Sortition Model of Moral Uncertainty
It might be interesting to distinguish between “personal hingeyness” and “utilitarian hingeyness”. Humans are not utilitarians, so we care mostly about what happens in our own lives; when we die, our personal tree stops and we can’t get more hinges. But “utilitarian hingeyness” continues, since it describes all possible utility. I made this with population ethics in mind, but you could use the same concept for your personal life; the most hingey time for you and the most hingey time for everyone would then be different.
I’m not sure I understand your last paragraph, because you didn’t clarify what you mean by the word “hingeyness”. If you meant “the range of total utility you can potentially generate” (aka hinge broadness) or “the amount by which that range shrinks” (aka hinge reduction), then it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction can be just as big in the 10th tick as in the 1st, but not bigger. I don’t think you’re talking about “hinge shift”, but maybe you were talking about hinge precipiceness, in which case: yes, that can totally be bigger in the 10th tick.
If in the first image we replace the 0 with a −100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The range of the possible utility of the endings is [−100 to 8] for 1 and [−100 to 6] for 3 (smaller). The range of the total utility you could generate over the future branches is [1->3->−100 = −96 up to 1->2->8 = 11] for 1 and [3->−100 = −97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you’re trying to convey? If not, could you maybe draw an example tree to show me what you mean?
Ending in negative numbers wouldn’t change anything. The number of endings will still shrink, the number of branches will still shrink, the range of the possible utility of the endings will still shrink or stay the same width, and the range of the total utility you could generate over the future branches will also shrink or stay the same width. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.
If we draw a tree of all possible timelines (and there is an end to the tree), the earlier choices will always have more branches sprouting out of them. If we are purely looking at possible endings, then the 1 in the first image has a range of 4 possible endings, but 2 has only 2 possible endings. If we’re looking at branches, then the 1 has a range of 6 possible branches, while 2 has only 2 possible branches. If we’re looking at ending utility, then 1 has a range of [0-8] while 2 has only [7-8]. If we’re looking at the range of possible utility you can experience, then 1 has a range from 1->3->0 = 4 utility all the way to 1->2->8 = 11 utility, while 2 has only 2->7 = 9 to 2->8 = 10.
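(To make the bookkeeping concrete, here is a minimal Python sketch that recomputes these numbers. The tree layout, with 1 branching to 2 and 3, 2 to 7 and 8, and 3 to 0 and 6, is my reconstruction from the numbers quoted in this thread, since the original image isn’t reproduced here.)

```python
# Toy tree from the first image (my reconstruction): each node is labeled
# with the utility gained on reaching it.
tree = {1: [2, 3], 2: [7, 8], 3: [0, 6], 7: [], 8: [], 0: [], 6: []}

def paths(node, prefix=()):
    """All node-to-ending utility sequences starting at `node`."""
    prefix = prefix + (node,)
    if not tree[node]:
        return [prefix]
    return [p for child in tree[node] for p in paths(child, prefix)]

for root in (1, 2):
    ps = paths(root)
    endings = [p[-1] for p in ps]       # utility of each possible ending
    totals = [sum(p) for p in ps]       # cumulative utility along each path
    print(root, len(ps), (min(endings), max(endings)), (min(totals), max(totals)))
# node 1: 4 endings, ending range (0, 8), cumulative range (4, 11)
# node 2: 2 endings, ending range (7, 8), cumulative range (9, 10)
```

Every range computed for node 2 sits inside the corresponding range for node 1, which is the shrinking described above.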
When we talk about the utility of endings it is possible that the range doesn’t change. For example:
(I can’t post images in comments so here is a link to the image I will use to illustrate this point)
Here the range of utility in endings for tick 1 (the first 10) is [0-10], and the range of endings for the first 0 (tick 2) is [0-10], which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.
Now the width of the range of the total utility you could potentially experience can also stay the same. For example, the lowest utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility; the difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility. The probability has changed (ending with a weird number like 19 is impossible for tick 2), and the range has shifted downwards from [10-20] to [0-10], but the width stays the same.
It just occurred to me that some people may also find the shift in range important for hingeyness. Maybe call that ‘hinge shift’?
Crucially, in none of these definitions is it possible to end up with a wider range later down the line than when you started.
A Toy Model of Hingeyness
I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn’t stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.
Thanks for replying to my question, but although this was nicely written it doesn’t really solve the problem. So I’m putting up a $100 bounty for anyone on this site (or outside it) who can solve this problem by the end of next year. (I don’t expect it will work, but it might motivate some people to start thinking about it).
I’ve touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn’t be hard for me to claim 99.9% accurate calibration by just making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and making predictions about how they’re going to roll). My post goes into more detail, but TLDR: by trying to predict how accurate your prediction is going to be, you can start to distinguish between “harder” and “easier” phenomena. This makes it easier to compare different people’s calibration and lets you check how good you really are at making predictions.
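(A minimal sketch of the dice example, with made-up numbers: the `calibration_error` helper and the 70%-stated/55%-actual figures for the “hard” domain are my own illustrative assumptions, not anything from the post.)

```python
import random

def calibration_error(predictions):
    """Gap between average stated confidence and the actual hit rate."""
    hit_rate = sum(outcome for _, outcome in predictions) / len(predictions)
    avg_confidence = sum(p for p, _ in predictions) / len(predictions)
    return abs(avg_confidence - hit_rate)

# "Easy" predictions: betting at 5/6 confidence that a fair d6 won't roll a 6.
easy = [(5 / 6, random.randint(1, 6) != 6) for _ in range(10_000)]

# "Hard" predictions: forecasts stated at 70% confidence that (by assumption
# here) only come true about 55% of the time, i.e. overconfidence.
hard = [(0.70, random.random() < 0.55) for _ in range(10_000)]

print(calibration_error(easy))  # close to 0: looks superbly calibrated
print(calibration_error(hard))  # around 0.15: same forecaster, harder domain
```

Averaging the two domains together would let the dice predictions mask the overconfidence; predicting your own calibration error per domain (the meta-certainty step) keeps them apart.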
I can also “print my own code”: if I make a future version of an MRI scan, I could give you all the information necessary to understand (that version of) me, but as soon as I look at it my neurological patterns change. I’m not sure what you mean by “add something to it”, but I could also give you a copy of my brain scan and add something to it. Humans and computers can of course know a summary of themselves, but never the full picture.
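(The computer half of the “print my own code” analogy is a classic quine, a program whose output is exactly its own source. This is a standard two-line Python example, not anything from the original discussion.)

```python
# A two-line quine: running this prints the two lines below verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)
```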
An annoying philosopher would ask whether you could glean knowledge of your “meta-qualia”, i.e. what it consciously feels like to experience what something feels like. The problem is that fully understanding our own consciousness is sadly impossible. If a computer discovers that in a certain location on its hardware it has stored a picture of a dog, it must then store that information somewhere else; but if it subsequently tries to know everything about itself, it must store the knowledge of the knowledge of the picture’s location somewhere else too, which it must also learn. This repeats in a loop until the computer crashes. An essay can fully describe most things, but not itself: “The author starts the essay by writing that he starts the essay by writing that...”. So, annoyingly, there will always be experiences that are mysterious to us.
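(A bounded sketch of that regress, with a made-up starting fact; in principle the loop never bottoms out, here it is cut off after five layers.)

```python
# Every record of what the system knows is itself a new fact to record.
knowledge = "a picture of a dog is stored at block 42"  # hypothetical starting fact
for _ in range(5):  # the regress never terminates; we cut it off at 5 layers
    knowledge = f"I know that: {knowledge}"
    print(knowledge)
```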
I was not referring to the ‘billionaires being universally evil’, but to the ‘what progressives think’ part.
I was talking about the “as progressives think”
billionaires really are universally evil just as progressives think
Can you please add a quantifier when you make assertions about plurals? You can make any group sound dumb/evil by omitting one. E.g., I can make atheists sound evil by saying the truthful statement “Atheists break the law”, but only because I didn’t add a quantifier like “all”, “most”, “at least one”, or “a disproportionate number”.
It was the second source in the post: [2]
There was a summary of it on the linked page itself:
From the study
I never claimed a massive growth advantage:
As I said, the meta-analyses only show a small growth advantage. If, e.g., a socialist firm grows by $1,000 and a capitalist firm by $900, but the capitalist firm gives the full $900 to its investors while the socialist firm splits its gains, giving $500 to investors and $500 to employees, then investors can make more money with capitalist firms.
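(The same arithmetic spelled out; the dollar figures are the hypothetical ones from the example above.)

```python
coop_growth = 1_000            # socialist firm's total gains
capitalist_growth = 900        # capitalist firm's total gains
coop_to_investors = 500        # co-op splits gains: $500 investors, $500 employees
capitalist_to_investors = 900  # capitalist firm pays everything to investors

# Despite growing less overall, the capitalist firm returns more to investors.
assert capitalist_to_investors > coop_to_investors
```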