Yvian, I warned against granting near-thought virtues to fictional detail here. I doubt Uncle Tom’s Cabin would have persuaded many slaveholders against slavery; I expect well-written, well-recommended anti-slavery fiction served more to signal to readers where fashionable opinion was moving.
Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any other fictional moral dilemma. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas? If I’m going to read a literature that might influence my moral beliefs, I’d rather read professional philosophers and other academics making more explicit arguments. In general, I trust explicit academic argument more than implicit fictional “argument.”
I have read and considered all of Eliezer’s posts, and still disagree with him on this, his grand conclusion. Eliezer, do you think the universe was terribly unlikely, and therefore terribly lucky, to have coughed up human-like values rather than some other values? Or is it only in the stage after ours that such rare good values would be unlikely to exist?
To be clear, Eliezer is developing a new website and will tentatively use his editor status here to occasionally promote some posts from there to here; whether and how long that continues will depend on the quality and relevance of those posts.
People who answer survey questions seem to consistently display a pessimism bias about these large-scale trends, and the equity premium puzzle can also be interpreted as people being unreasonably pessimistic about such things. So I find it hard to believe that people tend to be too optimistic here. If you really want to bet on the low tail of the global distribution, I guess you should listen to survivalists. If you think the US will be down more than other places, why not invest in foreign places you don’t think will be so down?
You forgot to mention—two weeks later he and all other humans were in fact deliriously happy. We can see that he at this moment did not want to later be that happy, if it came at this cost. But what will he think a year or a decade later?
Are you sure this isn’t the Eliezer concept of boring, instead of the human concept? There seem to be quite a few humans who are happy to keep winning using the same approach day after day, year after year. They keep getting paid well, getting social status, money, sex, etc. To the extent they want novelty, it is because such novelty is a sign of social status—a new car every year, a new girl every month, a promotion every two years, etc. It is not because they expect or want to learn something from it.
Thank you for the praise! I’ll post soon on fiction as near vs far-thinking.
It seems to me that you should take the surprising seductiveness of your imagined world that violated your abstract sensibilities as evidence that calls those sensibilities into question. I would have encouraged you to write the story, or at least write up a description of the story and what about it seemed seductive. I do think I have tried to describe how my best estimates of the future seem shocking to me, and that I would be out of place there in many ways.
It seems pretty obvious that time-scaling should work—just speed up the operation of all parts in the same proportion. A good bet is probably size-scaling, adding more parts (e.g. neurons) in the same proportion in each place, and then searching in the space of different relative sizes of each place. Clearly evolution was constrained in the speed of components and in the number of parts, so there is no obvious evolutionary reason to think such changes would not be functional.
Yeah Michael, what Eliezer said.
Even if Earth ends in a century, virtually everyone in today’s world is influential in absolute terms. Even if 200 folks do the same sort of work in the same office, they don’t do the exact same work, and usually a person wouldn’t be there or be paid if no one thought their work made any difference. You can even now identify your mark, but it is usually tedious to trace it out, and few have the patience for it.
I love it!
Virtually everyone in today’s world is influential in absolute terms, and should be respected for their unique contribution. The problem is those eager to be substantially influential in percentage terms.
Yes, humans are better at dealing with groups of size 7 and 50, but I don’t think that has much to do with your complaint. You are basically noticing that you would probably be the alpha male in a tribe of 50, ruling all you surveyed, and wouldn’t that be cool. Or in a world of 5000 people you’d be one of the top 100, and everyone would know your name, and wouldn’t that be cool. Even if we had better ingrown tools for dealing with larger social groups, you’d still have to face the fact that as a small creature in a vast social world, most such creatures can’t expect to be very widely known or influential.
I agree with Phil; all else equal I’d rather have whatever takes over be sentient. The moment to pause is when you make something that takes over, not so much when you wonder if it should be sentient as well.
I agree with Unknown. It seems that Eliezer’s intuitions about desirable futures differ greatly from many of the rest of us here at this blog, and most likely even more from the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his “having moved further toward reflective equilibrium.” Without a reason to think he will have vastly disproportionate influence, I’m having trouble seeing much point in all these posts that simply state Eliezer’s intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?
Most of our choices have this sort of impact, just on a smaller scale. If you contribute a real child to the continuing genetic evolution process, if you contribute media articles that influence future perceptions, if you contribute techs that change future society, you are in effect adding to and changing the sorts of people there are and what they value, and doing so in ways you largely don’t understand.
A lot of futurists seem to come to a similar point, where they see themselves on a runaway freight train, where no one is in control, knows where we are going, or even knows much about how any particular track switch would change where we end up. They then suggest that we please please slow all this change down so we can stop and think. But that doesn’t seem a remotely likely scenario to me.
You’ve already said the friendly AI problem is terribly hard, and there’s a large chance we’ll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be “friendly,” making your design task that much harder? A friendly AI that was conscious, and created conscious simulations to figure things out, would still be pretty friendly overall.
Eliezer, as I indicate in my new post, the issue isn’t so much whether you, the author, judge that some fiction would help inform readers about morals, but whether typical readers can reasonably trust your judgment in such things, relative to the average propaganda content of authors writing apparently similar moral-quandary stories.