The thing I got out of it was that human brain processes appear to be able to do something (assign a nonzero probability to a non-computable universe) that our current formalization of general induction cannot, and we can’t really explain why.
As I understand it, it is a comparative advantage argument. More rational people are likely to have a comparative advantage in making money compared to less rational people, so the utility-maximizing setup is for more rational people to make money and pay less rational people to do the day-to-day work of running the charitable organization. That’s the basic form of the argument, at least.
You are right, I should have said something like “implementing MWI over some morality.”
I don’t think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don’t care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don’t place a high marginal value on that.
There is also an opportunity cost to using statistics poorly instead of properly. The cost may be purely an externality (the person doing the test may actually benefit more from the deception), but overall the world would be better off if all statistics were used correctly.
But the important (and moral) question here is “how do we count the people for utility purposes?” We also need a normative way to aggregate their utilities, and one vote per person would need to be justified separately.
I don’t know game theory very well, but wouldn’t this only work as long as not everyone did it? Using the car example, if these contracts were common practice, you could have one for 4000 and the dealer could have one for 5000, in which case you could not reach the Pareto optimum (see the sketch after this comment).
In general, doesn’t this infinitely regress up meta levels? Adopting precommitments is beneficial, so everyone adopts them, then pre-precommitments are beneficial… (up to some constraint from reality like being too young, although then parents might become involved)
Is this (like some of Schelling’s stuff I’ve read) more instrumental than pure game theory? I can see how this would work in the real world, but I’m not sure that it would work in theory. (Please feel free to correct any and all of my game theory)
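To make the clashing-contracts worry concrete, here is a minimal sketch in Python. Only the 4000/5000 precommitments come from the car example above; the buyer’s true maximum and the dealer’s true minimum are illustrative assumptions:

```python
# Minimal sketch of the precommitment clash in the car example.
# The buyer's true maximum (6000) and the dealer's true minimum (4500)
# are made-up illustrative numbers; only the 4000/5000 precommitments
# come from the comment above.

def trade_possible(buyer_cap: int, dealer_floor: int) -> bool:
    """A sale can happen only if the buyer's cap meets or exceeds the dealer's floor."""
    return buyer_cap >= dealer_floor

# Without precommitments: any price between 4500 and 6000 makes both sides better off.
print(trade_possible(buyer_cap=6000, dealer_floor=4500))  # True

# With clashing precommitment contracts: the buyer is bound to pay at most 4000
# and the dealer to accept at least 5000, so the mutually beneficial price range
# becomes unreachable and the Pareto-improving trade never happens.
print(trade_possible(buyer_cap=4000, dealer_floor=5000))  # False
```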
I think the majority of people don’t evaluate AGI incentives rationally, especially failing to fully see its possibilities, whereas this is an easy-to-imagine benefit.
Personally, pseudonymity wasn’t that helpful. It’s not that I didn’t want to risk my good name or something, as much as that I just didn’t want to be publicly wrong among intelligent people. Even if people didn’t know that the comment was from me per se, they were still (hypothetically) disagreeing with my ideas, and I would still know that the post was mine. For me it was more hyperbolic discounting than rational cost-benefit analysis.
As a semi-lurker, this likely would have been very helpful for me. One problem I had was the lack of an introduction to posting. You can read everything, but it’s hard to learn how to post well without practice. As others have remarked, bad posts get smacked down fairly hard, which makes it hard for people to get practice… a vicious cycle. Having this could create an area where people who are not confident enough to post to the full site could get practice and confidence.
But doesn’t this give precommitting a positive expected utility for students? Students would precommit to whatever they thought was most likely to happen, and the teacher would still expect more late papers from having this policy.
I don’t know that much about the topic, but aren’t viruses more efficient at many things than normal cells? Could there be opportunities for improvement in current biological systems through better understanding of viruses?
Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.
OB has threading (although it doesn’t seem as good or as widely used as on LW).
This seems like both a wonderful idea, and not mutually exclusive with the original. Having this organization could potentially increase the credibility of the entire thing, get some underdog points with the general public (although I don’t know how powerful this is for average people), and act as a backup plan.
It seems interesting that lately this site has been going through a “question definitions of reality” stage (the AI in a box boxes you, this series). It does seem to follow that pushing materialism far enough leads back to something similar to Cartesian questions, but it’s still surprising.
My technique for getting time is to say “wait” about ten times, or until they stop and give me time to think. This probably won’t work very well for comment threads, but in person, not letting the other person continue generally works. Probably slightly rude, but more honest and likely less logically rude, a trade-off I can often live with.
I think the first problem we have to solve is what the burden of proof is like for this discussion.
The far view says that science and reductionism have a very good record of demystifying things that were thought to be unexplainable (fire, life, evolution), so the burden is on those saying the Hard Problem does not just follow from the Easy Problems. On this view, opponents of reductionism have to provide something close to a logical inconsistency in reducing consciousness. It would take a huge amount of evidence against reduction to overcome the prior in its favor coming from the far view.
The other side is that consciousness requires explaining first-person experience. This view says that the reductionists have to demonstrate why science can make this new jump beyond purely third-person explanations.
IMHO, problems similar to the second view have been raised against every major expansion of reductionism and science and have generally been proven wrong, so I vote that the burden of proof should be on those arguing against reductionism.
Whichever side ends up being right, it is important to first agree on what each side has to do to win or else each side can declare victory while agreeing on the facts.
(please note that this is my first post)
I found the phrasing in terms of evidence to be somewhat confusing in this case. I think there is some equivocation on “rationality” here, and that is the root of the problem.
For P=NP (if it or its negation is provable), a perfect Bayesian machine will (dis)prove it eventually. This is an absolute rationality: straight rational information processing without any heuristics or biases. In this sense it is “irrational” to never be able to (dis)prove P=NP.
But in the sense of “is this a worthwhile application of my bounded resources” rationality, for most people the answer is no. One can reasonably expect a human claiming to be “rational” to correctly solve the one-in-a-million illness, but not to have gone (or even be able to go) through the process of solving P=NP. In terms of fulfilling one’s utility function, solving P=NP given your processing power is most likely not the most fulfilling choice (except for some computer scientists).
So we can say this person is making the best trade-off between accuracy and work for P=NP, because it requires a large amount of work, but not for the one-in-a-million illness, because learning Bayes’ rule is very little work.
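For reference, here is what “correctly solving” the one-in-a-million illness looks like as a quick Bayes’ rule calculation, as a minimal sketch in Python. The 1-in-a-million prior is from the example’s name; the test’s sensitivity and false-positive rate are illustrative assumptions:

```python
# Worked instance of the one-in-a-million illness problem via Bayes' rule.
# The prior of 1 in 1,000,000 is from the example; the test characteristics
# (perfect sensitivity, 1% false-positive rate) are assumed for illustration.

prior = 1 / 1_000_000          # P(ill)
p_pos_given_ill = 1.0          # P(positive | ill), assumed
p_pos_given_healthy = 0.01     # P(positive | healthy), assumed

# Total probability of a positive test.
p_pos = p_pos_given_ill * prior + p_pos_given_healthy * (1 - prior)

# Bayes' rule: P(ill | positive) = P(positive | ill) * P(ill) / P(positive)
posterior = p_pos_given_ill * prior / p_pos

print(f"P(ill | positive test) = {posterior:.6f}")  # ~0.0001, still very unlikely
```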
I found this (scroll down for the majority of articles) graph of all links between Eliezer’s articles a while ago; it could be helpful. And it’s generally interesting to see all the interrelations.