If you cock up and define a terminal value that refers to a mutable epistemic state, all bets are off. Like Asimov’s robots on Solaria, who act in accordance with the First Law but have ‘human’ redefined not to include non-Solarians. Oops. The trouble is that, in order to evaluate how you’re doing, there has to be some coupling between values and knowledge, so you must prove the correctness of that coupling. But what counts as correct? That’s usually not too hard to define for the toy models we’re used to working with, but damned hard as a general problem.
I have a comment waiting in moderation on the isteve post Konkvistador mentioned, the gist of which is that the American ban on the use of genetic data by health insurers will cause increasing adverse selection as these services get better and cheaper, and that regulatory restrictions on consumer access to that data should be seen in that light. [Edit: it was actually on the follow-up.]
A pertinent question is what problem a government or business (not including a general AI startup) may wish to solve with a general AI that is not more easily solved by developing a narrow AI. ‘Easy’ here factors in the risk of failure, which will at least be perceived as very high for a general AI project. Governments and businesses may fund basic research into general AI as part of a strategy to exploit high-risk high-reward opportunities, but are unlikely to do it in-house.
One could also try to figure out some prerequisites for a general AI, and see what would bring them into play. So, for instance, I’m pretty sure that a general AI is going to have long-term memory. What AIs are going to get long-term memory? A general AI is going to be able to generalize its knowledge across domains, and that’s probably only going to work properly if it can infer causation. What AIs are going to need to do that?
Consider those charities that expect their mission to take years rather than months. These charities will rationally want to spread their spending out over time. Charities with large endowments, in particular, will attempt to live off the interest on their money rather than depleting the principal, although if they expect to receive more donations over time they can be more liberal.
This means that a single donation slightly increases the rate at which such a charity does good, rather than enabling it to do things which it could not otherwise do. So the scaling factor of the endowment is restored: donating $1000 to a charity with a $10m endowment increases the rate at which it can sustainably spend by 1000/10^7 = 0.01%.
This does not mean that a charity will say: look, if our sustainable spending rate were 0.01% higher we’d have enough available this year to fund the ‘save a million kids from starvation’ project; oh well. They’ll save the million kids and spend a bit less next year, all other things being equal. In other words, by maximising the good it does with the money it has, the charity smooths out the change in its utility for small differences in spending relative to the size of its endowment, i.e. the higher-order derivatives are low. So long as the utility you get from a charity comes from it fulfilling its stated mission, your utility will also vary smoothly with small spending differences.
Likewise, rational collaborating charities will each adjust their spending to increase any mutually beneficial effects. So mixed derivatives are low, too.
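To put the same point a little more formally (a sketch of my own; the notation isn’t from the comment above): write the good the charity does as a smooth function U of its endowment E, and treat a donation d as a small perturbation with d ≪ E. Then

```latex
U(E + d) \approx U(E) + U'(E)\,d + \tfrac{1}{2}\,U''(E)\,d^{2} + \cdots, \qquad d \ll E .
```

Saying the higher-order and mixed derivatives are low is just saying that everything past the U'(E)d term is negligible at this scale, so the marginal value of a small donation is effectively linear in its size.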
The upshot is that unless your donation is large enough to permanently and significantly raise the spending power of such a charity, you won’t be leaving the approximately linear neighbourhood in utility-space. So if you’re looking for counterexamples, you’ll need to find one of:
charities with both low endowments and low donation rates, which nevertheless can produce massive positive effects with a smallish amount of money
charities which must fulfil their mission in a short time and are just short of having the money to do so.
I don’t think you should write the post. Reason: negative externalities.
It looks like wezm has followed your suggestion, with extra hackishness—he added a new global variable.
Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.
I just gave myself a deadline to write a patch for that problem.
Edit: Done!
Task: Write a patch for the Less Wrong codebase that hides deleted/banned posts from search engines.
Deadline: Sunday, 30 January.
The thrust of your argument is that an agent that uses causal decision theory will defect in a one-shot Prisoner’s Dilemma.
You specify CDT when you say that
No matter what Agent_02 does, actually implementing Action_X would bear no additional value
because this implies Agent_01 looks at the causal effects of do(Action_X) and decides what to do based solely on them. It’s a Prisoner’s Dilemma because Action_X corresponds to Cooperate and not(Action_X) to Defect, with an implied Action_Y that Agent_02 could perform that is of positive utility to Agent_01 (hence, ‘trade’). It’s one-shot because, without causal interaction between the agents, they can’t update their beliefs.
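To make that concrete, here’s a toy illustration (my own sketch, using the standard textbook payoffs rather than anything from your post) of why CDT picks Defect no matter what it believes about the other agent:

```python
# One-shot Prisoner's Dilemma payoffs for Agent_01 (illustrative textbook values).
# "C" = Cooperate (Action_X), "D" = Defect (not(Action_X)).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# CDT evaluates the causal consequences of do(C) vs do(D) under its beliefs
# about Agent_02. Since D yields strictly more utility whatever Agent_02 does,
# do(D) has higher causal expected utility under every belief, so CDT defects.
defect_dominates = all(PAYOFF[("D", other)] > PAYOFF[("C", other)]
                       for other in ("C", "D"))
print("CDT choice:", "Defect" if defect_dominates else "indeterminate")
```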
That CDT-using agents unconditionally defect in the one-shot PD is old news. That you should defect against CDT-using agents in the one-shot PD is also old news. So your post rather gives the impression that you haven’t done the research on the decision theories that make acausal trade interesting as a concept.
And how do you propose to stop them? Put a negative term in their reward functions?
This is a TDT-flavoured problem, I think. The process that our TDT-using FAI uses to decide what to do with an alien civilization it discovers is correlated with the process that a hypothetical TDT-using alien-Friendly AI would use on discovering our civilization. The outcome in both cases ought to be something a lot better than subjecting us/them to a fate worse than death.
If that’s the case, then when a page is hidden the metadata should be updated to remove it from the search indexes. If you search ‘pandora site:lesswrong.com’ on Google, all the pages are still there, and can be followed back to LW. That is to say, the spammers are still benefiting from every piece of spam they’ve ever posted here.
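For illustration, the sort of fix I have in mind looks roughly like this (a sketch under my own assumptions, not the actual patch; the Link class name comes from the Reddit-derived codebase, but the _deleted/_spam attribute names are guesses):

```python
class Link:
    """Stand-in for the real article class, for demonstration only."""
    def __init__(self, deleted=False, spam=False):
        self._deleted = deleted
        self._spam = spam

def robots_meta_for(article):
    """Return the robots meta tag to emit in the page head for this article."""
    hidden = getattr(article, "_deleted", False) or getattr(article, "_spam", False)
    if hidden:
        # Ask crawlers to drop the page from their indexes and not follow its links.
        return '<meta name="robots" content="noindex, nofollow" />'
    return '<meta name="robots" content="index, follow" />'

print(robots_meta_for(Link(deleted=True)))  # -> noindex, nofollow
print(robots_meta_for(Link()))              # -> index, follow
```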
All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren’t any more evidence for the existence of an intelligent cause of the universe than the existence of humans: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.
If you don’t mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed, without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed theological explanations for the differences between the Bible and science, but that I couldn’t learn them yet; to my recollection, though, I was never actually told that, I just worked it out from the other things I knew.
It’s roughly as many words as are spoken worldwide in 2.5 seconds, assuming 7450 words per person per day. It’s very probably less than the number of English words spoken in a minute. It’s also about the number of words you can expect to speak in 550 years. That means there might be people alive who’ve spoken that many words, given the variance of word-production counts.
So, a near inconceivable quantity for one person, but a minute fraction of total human communication.
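A back-of-the-envelope check of those figures (the population counts of roughly 7 billion people and 1 billion English speakers are my assumptions, not stated in the comment above):

```python
WORDS_PER_PERSON_PER_DAY = 7450
WORLD_POPULATION = 7e9        # assumed
ENGLISH_SPEAKERS = 1e9        # assumed
SECONDS_PER_DAY = 24 * 60 * 60

# Words spoken worldwide in 2.5 seconds: about 1.5 billion.
print(WORLD_POPULATION * WORDS_PER_PERSON_PER_DAY / SECONDS_PER_DAY * 2.5)

# English words spoken per minute: about 5 billion, comfortably more than that.
print(ENGLISH_SPEAKERS * WORDS_PER_PERSON_PER_DAY / (24 * 60))

# One person's output over 550 years: also about 1.5 billion words.
print(WORDS_PER_PERSON_PER_DAY * 365 * 550)
```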
//Not an economist//
The minimum wage creates a class of people whom it isn’t worth hiring (their productivity is less than their cost of employment). If you have a device which raises the productivity of these guys, they can enter the workforce at minimum wage.
Additionally, there may be zero marginal product workers: workers who add nothing, net of their cost of employment, to the firm’s output. This could happen in a contracting job market if the fear of losing employment causes the other workers to increase their productivity enough: you could fire Jack and see John’s productivity increase enough to match the output, net of costs, that Jack provided. If such workers exist, then they offer a further source of labour that could be tapped even in the absence of minimum wage laws.
I agree with you that there’s a lack of economic logic in the story, though.
You can put degree requirements on the job advertisement, which should act as a filter on applications, something that can’t be caught by the 80% rule.
(Of course, universities tend to use racial criteria for admission in the US, something which, ironically, can be an incentive for companies to discriminate based on race even amongst applicants with CS degrees.)
Aha! The prophecy we just heard in chapter 96 is Old English. However, by the 1200s, when, according to canon, the Peverell brothers were born, we’re well into Middle English (which Harry might well understand on first hearing). I was beginning to wonder if there was not some old wizard or witch listening, for whom that prophecy was intended.
There’s still the problem of why brothers with an Anglo-Norman surname would have Old English as a mother tongue… well, that could happen rather easily with a Norman father and English mother, I suppose.
And the coincidence of Canon!Ignotus Peverell being born in 1214, the estimated year of Roger Bacon’s birth, seemed significant too… I shall have to go back over the chapters referring to his diary.