I don’t think that anyone ever thought that paying the bank to hold your money was a theoretical impossibility—paid checking accounts are not a new thing. What is supposed to be ‘impossible’ is for bank loans to have a negative interest rate—if the bank pays you to borrow money. Of course, even that was/is only ‘impossible’ with certain exceptions: lenders try to predict deflation and try not to loan at a negative real rate, but with enough expected deflation, even a negative nominal rate can still leave the lender with a non-negative real rate.
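As a rough illustration, here is the standard Fisher approximation (my addition, not part of the original exchange; the numbers are made up):

```latex
% Real rate is approximately the nominal rate minus the inflation rate.
% With deflation (negative inflation), a negative nominal rate can still
% leave the lender with a positive real rate.
\[
  r_{\text{real}} \approx r_{\text{nominal}} - \pi,
  \qquad \text{e.g. } r_{\text{nominal}} = -1\%,\ \pi = -3\%
  \ \Rightarrow\ r_{\text{real}} \approx +2\%.
\]
```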
Here’s an example that did not scale well: The New York Times Magazine: Paper Boys
...using a definition that isn’t used by the target...
I suspect you may be overestimating young children’s critical thinking abilities. If daddy says X is “powerful medicine”, then “powerful medicine” is defined as X.
Rationality takes extra time and effort, and most people can get by without it. It is easier to go with the flow—easier on your brain, easier on your social life, and easier on your pocketbook. And worse, even if you decide you like rationality, you can’t just tune into the rationality hour on TV and do what they say—you actually have to come up with your own rationality! It’s way harder than politics, religion, or even exercise.
I think it demonstrates something stronger—we have, as humans, already developed a game (Chicken) with very meaningful outcomes in which lower intelligence is beneficial, despite the fact that the humans in question were not intending to select for low IQ and would not have seen a rock as a valid player.
If we are talking about Chicken we do not have to assume a rock (which has no preference), but simply a human with bad judgement, or slow reactions, or who is panicking.
So,
I’m not sure what that proves.
Well, ‘proof’ aside, it demonstrates that:
stupidity can be an advantage
Among other apparently maladaptive responses.
It is just that until you explain HOW you organize this without ownership, it is impossible to determine how such a system without ownership compares to the current one.
To a close approximation, the new system looks just like the old system, just without the paychecks. Assuming that workers know their value (big assumption), then the question becomes “to create the most Xyrikes, should I do my job or change to a job producing Xyrikes?”
Caviar producers should change jobs; grain producers should not; salt producers should determine what exactly is meant by “temporarily” before making a decision.
Taking the hypothetical as it is given, I think it is fair to assume that no one will quit their job simply because it is unpleasant or because someone else could do it—those don’t really count as working together (or “pooling resources … without worrying”).
Human resources include skills like planning, logistics, common sense, and health and safety. Of course, it is possible that good planning skills are so limited that they must be devoted primarily to producing Xyrikes, and not keeping people healthy.
This is true—but the example as given assumes no centrally managed economy. It’s just a case of everyone independently deciding to maximize paperclips.
We have moved away from a complex allocation system to a simple one. It doesn’t matter if you use money—the relevant aspect of the situation is cooperation.
Of course, I may be reading too much into “everyone decides”. But I’m assuming that if they all ‘decide’ to do something because they have a gun to their head, then the downside is obvious.
Oh. I had assumed that “not planning for catering” fell in the “odd cases” category, but maybe I overestimate humans.
I don’t invent time travel for another 60 years. But I will get back to you in 2075.
On a more serious note, I wasn’t wanting the deaths removed, just balanced.
Is voluntarily starving to death a drawback?
I think that under your example, there are by definition no drawbacks.
If Ed is annoyed that everyone else has gone commie, then by definition we are not talking about your hypothetical (We didn’t get ‘everyone on the planet’). So we can’t suppose drawbacks that involve ‘someone not liking it’.
Generally speaking, a situation in which everyone freely decides to work together has no downside (except in odd cases like everyone deciding to work together to burn down all the plants on Earth).
If I were to look for a realistic problem—other than probability, human nature, and logistics—it would be the word “temporarily”. I don’t imagine that coming out of the universal accord would go smoothly.
Yes.
(I see that LessWrong has twigged to the fact that this was a stupid joke and not a serious proposal, and I accept the downkarma.)
Pretty much the same sort of life as makes the death notable.
Notable deaths
This is just depressing. Can we have a notable births section?
This sort of problem is susceptible to the Red Queen’s race—if you solve it, that just means that lots of people will copy you, and you will find yourself in competition with a new crop of viral religions. You don’t make much long-term progress by being innovative, so perhaps your best bet is to copy some moderately successful cults and hope that power won’t corrupt you. The Hare Krishnas are a good template—just avoid the racketeering, mail fraud, murder, and child abuse, and you should do fine.
I am assuming that investment in index funds is scalable and was therefore including in my sample all long term investors in index funds. If this strategy is not scalable, I withdraw my analysis.
You can’t say “strategy A produced a better result than strategy B, therefore strategy A is a better strategy” based on a single example of someone using strategy A.
You have your example backwards.
We have a case of many many people using strategy A (index funds), and a single example of strategy B (Trump). And you can say that the strategy that worked lots of times is a better bet than the one that failed once. Strategy A is better in the limited sense that given our current information, it looks safer.
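To make the point concrete, here is a minimal sketch with made-up numbers (mine, not from the thread): many observations of strategy A pin down its success rate fairly tightly, while a single observation of strategy B tells you almost nothing.

```python
# Beta(1, 1) prior over each strategy's probability of "working out";
# compare the width of the resulting credible intervals.
from scipy.stats import beta

def credible_interval(successes, trials, level=0.95):
    """Central credible interval for P(strategy works), Beta(1, 1) prior."""
    tail = (1 - level) / 2
    posterior = beta(1 + successes, 1 + trials - successes)
    return posterior.ppf(tail), posterior.ppf(1 - tail)

# Strategy A: many index-fund investors, most of whom did fine (made-up counts).
print(credible_interval(successes=950, trials=1000))  # narrow interval

# Strategy B: a single observation, which went badly.
print(credible_interval(successes=0, trials=1))       # very wide interval
```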
I don’t think it can completely keep track of itself—that would take its whole capacity and then some.
I have a different view of AI (I do not know if it is better or more likely). I would see the AI as a system almost entirely devoted to keeping track of itself. The theory behind a hard takeoff is that we already have pretty much all the resources to do the tasks required for a functional AI; all that is missing is the AI itself. The AI is the entity that organizes and develops the existing resources into a more useful structure. This is not a trivial task, but it is founded on drives and goals. Assuming that we aren’t talking about a paperclip maximizer, the AI must have an active and self-modifying sense of purpose.
Humans got here the messy way—we started out as wiggly blobs wanting various critical things (light/food/sex), and it made sense to be barely better than paperclip maximizers. In the last million years we started developing systems in which maximizing the satisfaction of drives stopped being an effective strategy. We have a lot of problems with mental ecology that probably derive from that.
It’s not obvious what the fundamental drives of an AI would be—it is arguable that ‘fundamental’ just doesn’t mean the same thing to an AI as it does to a biological being… except in the unlucky case that AIs are essentially an advanced form of computer virus, gobbling up all the processor time they can. But it seems that any useful AI—any AI whose mental/emotional healing we care about—would have to be first and foremost a drive/goal tuning agent, and only after that a resource management agent.
This almost has to be the case, because an AI that is driven first by output and second by goal-tuning is going to be (1) a paperclip maximizer (its mental economy may be complex, but conflict will almost always be solved by the simple question “what makes more paperclips?”), (2) insane (having multiple conflicting primary drives, each more compelling than the drive to correct the conflict, seems to fall entirely within the set that we would consider insane, even for particularly strict definitions of insane), or (3) below the threshold for general AI (although I admit this depends on how pessimistic your view of humans is).
Suppose it makes a large increase in its capacity. How can it tell whether or not it’s made an improvement? Or a mistake?
These are complex decisions, but not particularly damaging ones. I can’t think of any problem in this area that an AI should find inherently unhealthy. Some matters may be hard, or indeterminate, or undetermined, but it is simply a fact about living in the universe that an effective agent will have to have the mental framework for making educated guesses (and sometimes uneducated guesses), and processing the consequences without a mental breakdown.
The simple case would be having an AI predict the outcome of a coin flip without going insane—too little information, a high failure rate, and no improvement over time could drive a person insane, if they did not have the mental capacity to understand that this is simply a situation that is not under their control. Any functional AI has to have the ability to judge when a guess is necessary and to deal with that. Likewise, it has to be able to know its capability to process outcomes, and not break down when faced with an outcome that is not what it wanted, or that requires a change in thought processes, or simply cannot be interpreted with the current information.
There are certainly examples of hard problems (most of Asimov’s stories about robots involve questions that are hard to resolve under a simple rule system), and his robots do have nervous breakdowns… but you and I would have no trouble giving rules that would prevent a nervous breakdown. In fact, usually the rule is something simple like “if you can’t make a decision that is clearly best, rank the tied options as equal, and choose randomly”. We just don’t want to recommend that rule to beings that have the power to randomly ruin our lives—but that only becomes a problem if we are the ones setting the rules. If the AI has power over its own rule set, the problem disappears.
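A minimal sketch of that tie-breaking rule (hypothetical names; my own illustration rather than anything from the discussion):

```python
import random

def choose(options, score):
    """Rank the options; if several are tied for the best score, treat them
    as equal and pick one of the tied options at random."""
    best = max(score(option) for option in options)
    tied = [option for option in options if score(option) == best]
    return random.choice(tied)

# Example: two options score equally well, so either may be chosen.
print(choose(["swerve left", "swerve right", "freeze"],
             score=lambda option: 0 if option == "freeze" else 1))
```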
To analogize a human problem, suppose that an FAI starts out having to fend off capable UFAIs. It’s going to have to do extensive surveillance, which may be too much under other circumstances—a waste of resources. How does it decide how much is too much?
This is a complex question, but it is also the sort of question that breaks down nicely (a rough code sketch follows the steps below):
How big a threat is this? (The best guess may be not so good, but if the AI cannot handle not-so-good guesses, it will have a massive nervous breakdown early on, and will no longer concern us.)
How much of my resources should I devote to a problem that big?
What is the most effective way(s) to apply those resources to that problem?
Do that thing.
Move on to the next problem.
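A rough sketch of that loop (entirely hypothetical names and sizing rule, just to make the structure concrete):

```python
def handle_threats(threats, estimate_severity, respond, budget):
    """Assess each threat with a best guess, size a response to that guess,
    act, and move on to the next problem."""
    # Worst (estimated) threats first; a bad estimate is tolerated, not fatal.
    for threat in sorted(threats, key=estimate_severity, reverse=True):
        severity = estimate_severity(threat)   # best guess; may be not-so-good
        allocation = min(budget, severity)     # hypothetical sizing rule
        respond(threat, allocation)            # most effective use of that allocation
        budget -= allocation                   # then move on to the next problem
```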
As I write this out, I see that a large part of my argument is that AIs that do not have good mental ecology with a foundation of self-monitoring and goal/drive analysis will simply die out or go insane (or go paperclip) rather than become a healthy, interesting, and useful agent. So really, I agree that mental health is critically important, I just think that it is either in place from the start, or we have an unfriendly AI on our hands.
I realize that I may be shifting the goal posts by focusing on general AI. Please shift them back as appropriate.
It may also be useful to look for patterns in the names that are hardest for you to remember. For example, I have a harder time with longer names and names that have odd stresses. The two of these together mean that I am almost guaranteed to forget a name, so those are the names that I make certain to write down and rehearse, often associating them with a simpler nickname. However, I could not identify these names as upcoming problems until I recognized the pattern.
Another, much smaller, example.
Edit: typo.