I think the reason AI and nanotech often go together in discussions of the future is summed up in this quote by John Cramer: “Nanotechnology will reduce any manufacturing problem, from constructing a vaccine that cures the common cold to fabricating a starship from the elements contained in sea water, to what is essentially a software problem.”
When people make purchasing decisions, pricing models that are too complex make them less likely to purchase. If it’s too confusing to figure out whether something is a good deal, we tend to assume it’s a bad one. See, for example, “Choice Environment, Market Complexity and Consumer Behavior: A Theoretical and Empirical Approach for Incorporating Decision Complexity into Models of Consumer Choice” (http://ideas.repec.org/p/ags/ualbsp/24093.html).
I occasionally read the blog of Scott Adams, the author of Dilbert. He claims to believe that the world is a simulation, but who can blame him? His own situation is so improbable that he has to cast about for some explanation. I predict that among celebrities (and the unusually successful in other fields), belief that wanting things hard enough makes them come to you is unusually common—because, like everyone else, they wished for something in life, but unlike most people, they actually got it.
Perhaps Columbus’s “genius” was simply to take action. I’ve noticed this in executives and higher-ranking military officers I’ve met—they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn’t, but the success rate is a lot better than for people who never take action at all.
This sort of argument was surprisingly common in the 18th and 19th centuries compared to today. The Federalist Papers, for example, lay out the problem as a set of premises leading inexorably to a conclusion. I find it hard to imagine a politician successfully using that form of argument today.
At least that’s my impression; perhaps appeals to authority and emotion were just as common in the past as they are today, but selection effects prevent me from seeing them.
I really enjoyed the first part of the post—just thinking about the fact that my future goals will be different from my present ones is a useful idea. I found the bit of hagiography about E.Y. at the end weird and not really on topic. You might just use a one- or two-sentence example: he wanted to build an AI, and then later he didn’t.
Regarding cyberpunk, Gibson wasn’t actually making a prediction, at least not in the way you’re thinking. He was always commenting on his own time by exaggerating certain aspects of it. See here, for instance: http://boingboing.net/2012/09/13/william-gibson-explains-why-sc.html
Great! This means that in order to develop an AI with a proper moral foundation, we just need to reduce the following statements of ethical guidance to predicate logic, and we’ll be all set:
Be excellent to each other.
Party on, dudes!
I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I’d like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?
Perhaps you would suggest showing the histograms of completion times on each site, along with the 95% confidence error bars?
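For concreteness, here is a minimal sketch of what such a figure might look like, assuming Python with NumPy, SciPy, and Matplotlib. The completion times are simulated and every variable name is hypothetical—this is only an illustration of the suggested histograms-plus-error-bars presentation.

```python
# Hypothetical illustration: histograms of completion times for two site
# versions, plus each group's mean with a 95% confidence interval.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
site_a = rng.normal(42.0, 8.0, 30)  # simulated completion times in seconds
site_b = rng.normal(37.0, 8.0, 30)  # second site simulated as faster

fig, (ax_hist, ax_ci) = plt.subplots(1, 2, figsize=(9, 3.5))

# Overlaid histograms of the raw completion times.
bins = np.linspace(min(site_a.min(), site_b.min()),
                   max(site_a.max(), site_b.max()), 12)
ax_hist.hist(site_a, bins=bins, alpha=0.6, label="Site A")
ax_hist.hist(site_b, bins=bins, alpha=0.6, label="Site B")
ax_hist.set_xlabel("Completion time (s)")
ax_hist.set_ylabel("Participants")
ax_hist.legend()

# Group means with t-based 95% confidence intervals as error bars.
for i, (label, data) in enumerate([("Site A", site_a), ("Site B", site_b)]):
    half_width = stats.t.ppf(0.975, df=len(data) - 1) * stats.sem(data)
    ax_ci.errorbar(i, data.mean(), yerr=half_width, fmt="o", capsize=4)
ax_ci.set_xticks([0, 1])
ax_ci.set_xticklabels(["Site A", "Site B"])
ax_ci.set_xlim(-0.5, 1.5)
ax_ci.set_ylabel("Mean completion time (s), 95% CI")

plt.tight_layout()
plt.show()
```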
Can you give me a concrete course of action to take when I am writing a paper reporting my results? Suppose I have created two versions of a website and timed 30 people completing a task on each one. The people on the second website were faster. I want my readers to believe that this wasn’t merely a statistical coincidence. Normally, I would do a t-test to show this. What are you proposing I do instead? I don’t want a generalization like “use Bayesian statistics,” but a concrete example of how one would test the data and report it in a paper.
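To pin down the conventional analysis I’m describing, here is a minimal sketch assuming Python with NumPy and SciPy. The 30 timings per site are simulated and all names are hypothetical; it shows only the t-test I would normally run, not the Bayesian alternative I’m asking about.

```python
# Hypothetical illustration of the usual approach: an independent-samples
# t-test on two groups of 30 simulated task-completion times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
site_a = rng.normal(42.0, 8.0, 30)  # simulated completion times in seconds
site_b = rng.normal(37.0, 8.0, 30)  # second site simulated as faster

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)

# An effect size to report alongside the test: the difference in means with
# a 95% confidence interval using the Welch-Satterthwaite degrees of freedom.
diff = site_a.mean() - site_b.mean()
var_a, var_b = site_a.var(ddof=1), site_b.var(ddof=1)
n_a, n_b = len(site_a), len(site_b)
se = np.sqrt(var_a / n_a + var_b / n_b)
df = (var_a / n_a + var_b / n_b) ** 2 / (
    (var_a / n_a) ** 2 / (n_a - 1) + (var_b / n_b) ** 2 / (n_b - 1)
)
ci_low, ci_high = stats.t.interval(0.95, df=df, loc=diff, scale=se)

print(f"t({df:.1f}) = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Mean difference = {diff:.1f} s, 95% CI [{ci_low:.1f}, {ci_high:.1f}]")
```

Welch’s variant is used here rather than the pooled-variance test only because it is a safer default when the two groups’ variances may differ; with a pooled test the degrees of freedom would simply be 58.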
Wait, there are solved problems in ethics?
I think a lot of people are misunderstanding the linked xkcd, or maybe I am. The way I see it, it’s not about misusing the word “logic.” It’s about people coming in from the outside, thinking that just because they are smart, they know how to solve problems in a field they are completely inexperienced in and have spent very little time thinking about compared to those who think about it as a full-time job.
I thought that Randall Munroe might be talking about LW, but I wasn’t sure, so I asked if anyone else had the same impression. At least one other person did. Most people didn’t.
Look, I like Less Wrong. It’s fun. But if you want to have an influence on the world, you need to engage with the discussions the professionals are having. You need to publish in scientific journals. You need to play the game that’s out there well enough to win. I don’t think people should feel insulted by my suggesting this. Getting insulted by ideas that make us uncomfortable isn’t what I feel this place is about.
Thanks, I’ll try that.
I tried to search for it before I posted, but failed to find it. Nice to see at least one other person felt the same way on reading the comic. I feel like we as a group are sometimes guilty of trying to reinvent the wheel instead of participating in the scholarly philosophy and AI communities by publishing papers. It’s a lot easier this way, and there’s less friction, but some of this has been said before, and smart people have already thought about it.
“It doesn’t share any of the characteristics that make you object to murder of the usual sort.” I disagree—it shares the most salient aspect of murder, namely the harm it does to the murdered human being’s future. The other features are also objectionable, but a case of murder that lacks all of them (say, the painless murder of a baby with no close acquaintances, friends, or family) is still rightfully considered murder. This is why most abortion advocates (unlike the author of this article) do not consider a fetus a “human being” at all. If they did, they would have to confront this argument head-on.
Interviewer: How do you answer critics who suggest that your team is playing god here?
Craig Venter: Oh… we’re not playing.
Perhaps a good place to start would be the literature on life satisfaction and happiness. Statistically speaking, which voluntary life changes lead to the greatest increase in life satisfaction at the least cost in effort, money, and trouble?