What you are observing is part of the phenomenon of meta-contrarianism. Like everything Yvain writes, the aforementioned post is well worth a read.
Hmm. To me it seemed intuitively clear that the function would be monotonic.
In retrospect, this monotonicity assumption may have been unjustified. I’ll have to think more about what sort of curve this function follows.
> or they could even restrict options to typical government spending.
JoshuaFox noted that the government might tack on such restrictions.
That said, it’s not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.
For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it’s one of the many things that tax money often goes towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the government grants we get? After all, I always wanted a bigger budget...
Or what if I’m a government employee? Could I give my money to the part of government spending that is assigned as my salary?
I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs that employee? Or do I have to give it to that department “as is”, with no restrictions on what it is spent on?
The more specificity you add, the more abusable it is, and the more you take away, the closer it becomes to the current system. In fact, the current system is merely this exact proposal, with the specificity dial turned down to the minimum.
Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you’ll have a hard time justifying the choice of that particular point, as opposed to one further up or down.
Even formalisms like AIXI have mechanisms for long-term planning, and it is doubtful that any AI built will be merely a local optimiser that ignores what will happen in the future.
As soon as it cares about the future, the future is a part of the AI’s goal system, and the AI will want to optimize over it as well. You can make many guesses about how future AIs will behave, but I see no reason to suspect they would be small-minded and short-sighted.
You call this trait of planning for the future “consciousness”, but this isn’t anywhere near the definition most people use. Call it by any other name, and it becomes clear that it is a property that any well designed AI (or any arbitrary AI with a reasonable goal system, even one as simple as AIXI) will have.
No, no, no: He didn’t say that you don’t have permission if you don’t steal it, only that you do have permission if you do.
What you said is true: If you take it without permission, that’s stealing, so you have permission, which means that you didn’t steal it.
However, your argument falls apart at the next step, the one you dismissed with a simple “etc.” The fact that you didn’t steal it in no way invalidates your permission: the premise is only stealing ⇒ permission, not stealing ⇔ permission, so it is not necessarily the case that ¬stealing ⇒ ¬permission.
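A quick brute-force check of that last point (just an illustrative Python snippet; the boolean names are mine, not anything from the original exchange):

```python
from itertools import product

# Does (stealing => permission) force (not stealing => not permission)?
# Enumerate all truth assignments and look for a counterexample.
for stealing, permission in product([True, False], repeat=2):
    premise = (not stealing) or permission    # stealing => permission
    claimed = stealing or (not permission)    # ~stealing => ~permission
    if premise and not claimed:
        print(f"counterexample: stealing={stealing}, permission={permission}")
# Prints the single counterexample: stealing=False, permission=True,
# i.e. you can have permission without having stolen anything.
```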
You could use some sort of cloud service: for example, Dropbox. One of the main ideas behind Dropbox was to have a way for multiple people to easily edit stuff collaboratively. It has a very easy user interface for such things (just keep the deck in a synced folder), and you can do it even without all the technical fiddling you’d need for git.
By observing the lack of an unusual amount of paperclips in the world which Skynet inhabits.
I have some rambling thoughts on the subject. I just hope they aren’t too stupid or obvious ;-)
Let’s take as a framework the aforementioned example of the last digit of the zillionth prime. We’ll say that the agent will be rewarded for getting it right under, shall we say, a log scoring rule. This means that the agent is incentivised to give the best (most accurate) probabilities it can, given the information it has. The more overconfident it is, the more it loses, and likewise for underconfidence.
By the way, for now I will assume the agent fully knows the scoring rule it will be judged by. It is quite possible that this assumption raises problems of its own, but I will ignore them for now.
So, the agent starts with a prior over the possible answers (a uniform prior?), and starts updating. But it wants to figure out how long to spend doing so before it should give up and hand in its “good enough” answer for grading. This is the main problem we are trying to solve here.
In the degenerate case in which it has nothing else in the universe to give it utility, I actually think the correct answer is to work on the problem forever (or for as long as it can before physically falling apart). But we shall make the opposite assumption. Let’s call the amount of utility the agent loses as an opportunity cost in a given unit of time C. (We shall also assume that the agent knows what C is, at least approximately. This is perhaps a slightly more dangerous assumption, but we shall accept it for now.)
So, the agent wants to keep working only as long as the marginal utility it expects to gain from the scoring rule for one more unit of work is greater than C.
The only problem left is figuring out that margin. But, by the assumption that the agent knows the scoring rule, it knows the derivative of the scoring function as well. At any given point in time, it can figure out how much the potential utility changes with a change to the probabilities it assigns. Thus, if the agent knows approximately the range within which it may update in the next step, it can figure out whether or not that step is worthwhile.
In other words, once it predicts that a marginal update would move it closer to the answer by an amount worth less than C in utility, it can quit and not perform the next step.
This makes sense, right? I do suspect that this is the direction to drive at in the solution to this problem.
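Here is a minimal sketch of that stopping rule in Python. Everything in it is assumed for the sake of illustration: the value of C, and especially the toy dynamic where each unit of work is expected to close half of the remaining gap to certainty; the only real ingredient is the log scoring rule itself.

```python
import math

def log_score(p_true: float) -> float:
    """Log scoring rule: the reward is the log of the probability assigned to the true answer."""
    return math.log(p_true)

def worth_another_step(p_now: float, p_next: float, cost_per_step: float) -> bool:
    """Keep working only while the predicted gain in score from one more unit
    of work exceeds the opportunity cost C of that unit of time."""
    return log_score(p_next) - log_score(p_now) > cost_per_step

# Toy run. C is assumed known, and each unit of work is (purely by assumption)
# expected to close half of the remaining gap between the current probability
# on the true answer and certainty.
C = 0.02
p = 0.1          # uniform prior over the ten possible last digits
steps = 0
while worth_another_step(p, p + (1 - p) / 2, C):
    p = p + (1 - p) / 2
    steps += 1

print(f"stopped after {steps} steps, handing in p = {p:.4f} on the favoured digit")
```

The only load-bearing part is the while-condition: compare the predicted improvement in log score from one more step against C, and hand the answer in the moment that comparison fails.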
If a comment has 100% upvotes, then obviously the number of upvotes it got is exactly equal to the karma score of the comment in question.
In this write-up of the 2013 Boston winter solstice celebration, there is a list of songs sung there. I would suggest this as a primary resource for populating your list.
Upvoted for explicitly noticing and noting your confusion. One of the best things about Less Wrong is that noticing the flaws in one’s own argument is respected and rewarded. (As it should be, in a community of truth-seekers.)
Good for you!
As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.
I also would like to reiterate what I said on PredictionBook: I don’t think PredictionBook is really meant for this kind of question. Asking it here is fine, even good. It gives us a chance to direct you to the correct place without clogging up PredictionBook with nonpredictions.
Right. Many people use the word “utilitarianism” to refer to what is properly named “consequentialism”. This annoys me to no end, because I strongly feel that true utilitarianism is an incoherent idea (it doesn’t really work mathematically; if anyone wants me to explain further, I’ll write a post on it).
But when these terms are used interchangeably, it gives the impression that consequentialism is tightly bound to utilitarianism, which is strictly false. Consequentialism is a very useful and elegant moral meta-system. It should not be shouldered out by utilitarianism.
In a sense, most certainly yes! In the middle ages, each fiefdom was in effect a small state, controlling not all that much territory in its own right. There certainly wasn’t the concept of nationalism as we know it today. And even if some duke was technically subservient to a king, that king wasn’t issuing laws that directly impacted the duke’s land on a day-to-day basis.
This is unlike what we have today: We have countries that span vast areas of land, with all authority reporting back to a central government. Think of how large the US is, and think of the fact that the government in Washington DC has power over it all. That is a centralized government.
It is true that there are state governments, but they are weak. Too weak, in fact. In the US today, the federal government is the final source of authority. The president of the US has far more power over what happens in a given state than a king in the middle ages had over what happened in any feudal dukedom.
Or, prediction markets.
Same thing really, just cleaner and more elegant.
Could the article you had in mind be this?
In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See, for example, Why Our Kind Can’t Cooperate.) It’s an important point, regardless.
Yes. What I wrote was a summary, and not as perfectly detailed as one may wish. One can quibble about details: “the market”/”a market”, and those quibbles may be perfectly legitimate. Yes, one who buys S&P 500 index funds is only buying shares in the large-cap market, not in all the many other things in the US (or world) economy. It would be silly to try to define an index fund as something that invests in every single thing on the face of the planet, and some indices are more diversified than others.
That said, the archetypal ideal of an index fund is that imaginary fund holding one piece of everything in the world. A fund is more “indexy” the more diversified it is. In other words, when one buys index funds, what one is buying is diversity. To a greater or lesser extent, of course, and one should buy not only the broadest index funds available, but also many different (non-overlapping?) index funds, if one wants to reap the full benefit of diversification.
Not an economist or otherwise particularly qualified, but these are easy questions.
I’ll answer the second one first: This advice is exactly the same as the advice to hold a diversified portfolio. The concept of an index fund is to own a tiny little piece of each and every thing that’s on the market. The reasoning behind buying index funds is exactly the reasoning behind holding a diversified portfolio.
As for the first question, remember the idea is to buy a little bit of everything, to diversify. So go meta, and buy little bits of many different index funds. In fact, since this is considered a good idea, people have made meta-index funds, indices of indices, that you can buy in order to get a little bit of each index fund.
But as an index is defined as “a little bit of everything”, the question of which one fades a lot in importance. There are indices of different markets, so one might ask which market to invest in, but even there you want to go meta and diversify. (Say, with one of those meta-indices.) And yes, you want to find one with low fees, which invests as widely as possible, etc. All the standard stuff. But while fiddling with the minutiae may matter, it pales when compared to the difference between buying indices and stupidly trying to pick stocks yourself.
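To make the diversification point concrete, here is a toy Monte Carlo sketch. All the numbers are invented, and the assets are assumed independent, which real stocks are not, so real diversification reduces risk less dramatically than this; the point is only the direction of the effect.

```python
import random
import statistics

random.seed(0)

def spread_of_returns(num_holdings: int, trials: int = 10_000) -> float:
    """Standard deviation of the total return when $1 is split evenly across
    num_holdings assets, each with the same made-up return distribution."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.gauss(0.07, 0.20) / num_holdings
                          for _ in range(num_holdings)))
    return statistics.stdev(totals)

for n in (1, 10, 100):
    print(f"{n:>3} holdings: stdev of yearly return ~ {spread_of_returns(n):.3f}")
# The expected return is the same in every case; only the spread of outcomes shrinks.
```

The mean return stays put while the spread of outcomes shrinks as holdings are added, which is the whole case for buying the broadest index you can rather than a handful of hand-picked stocks.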
This is a very appropriate quote, and I upvoted. However, I would suggest formatting the quote in markdown as a quote, using “>”.
> Something like this
In my opinion, this quote format is better: it makes it easier to distinguish it as a quote.
In any case, I’m sorry for nitpicking about formatting, and no offence is intended. Perhaps there is some reason I missed that explains why you put it the way you did?
Stupid mathematical nitpick:
Actually, it is more correct to say that .95 ^ 39 ≈ 0.14.
If we calculate it out to a few more decimal places, we see that .95 ^ 39 is ~0.135275954. This is closer to 0.14 than to 0.13, and the mathematical convention is to round accordingly.
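If anyone wants to check the arithmetic in a Python shell (purely a verification of the numbers above):

```python
x = 0.95 ** 39
print(x)            # ~0.135275954...
print(round(x, 2))  # 0.14 -- closer to 0.14 than to 0.13, so it rounds up
```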