The most standard business tradeoff is Cheap vs Fast vs Good, which typically you’re only supposed to be able to get two of.
Does anyone have experience with Inositol? It was mentioned recently on one of the better parts of the website no one should ever go to, and I just picked up a bottle of it. It seems like it might help with pretty much anything and doesn’t have any downsides . . . which makes me a bit suspicious.
In some sense I think General Intelligence may contain Rationality. We’re just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.
An agent which perfectly plays a solved game ( http://en.wikipedia.org/wiki/Solved_game ) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of input: the states of a tic-tac-toe board, for instance.
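To make that concrete, here’s a minimal sketch (my own illustration, not anything from the original discussion) of such an agent: exhaustive minimax over tic-tac-toe. Within its tiny domain it is perfectly rational, but the only input it can ever accept is a 3x3 board.

```python
def winner(board):
    """Return 'X' or 'O' if someone has won, else None. `board` is a 9-char string."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None


def minimax(board, player):
    """Return (score, move) for `player` to move, scored from X's perspective."""
    win = winner(board)
    if win == 'X':
        return 1, None
    if win == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None  # draw
    results = []
    for i, cell in enumerate(board):
        if cell == ' ':
            score, _ = minimax(board[:i] + player + board[i + 1:],
                               'O' if player == 'X' else 'X')
            results.append((score, i))
    return max(results) if player == 'X' else min(results)


# Usage: X has taken the centre; the best O can do is force a draw (score 0),
# and the first such move found is the corner at index 0.
print(minimax('    X    ', 'O'))   # -> (0, 0)
```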
We can certainly point to people who are extremely intelligent but quite irrational in some respects—but if you increased their rationality without making any other changes I think we would also say that they became more intelligent. If you examine their actions, you should expect to see that they are acting rationally in most areas, but have some spheres where rationality fails them.
This is because, in my definition at least:
Intelligence = Rationality + Other Stuff
So rationality is one component of a larger concept of Intelligence.
General Intelligence is the ability of an agent to take inputs from the world, compare them to a preferred state of the world (its goals), and take actions that make that state of the world more likely to occur.
Rationality is how accurate and precise that agent is, relative to its goals and resources.
General Intelligence includes this, but also has concerns such as:
being able to accept a wide variety of inputs
having lots of processing power
using that processing power efficiently
I don’t know if this covers it 100%, but this seems like it matches general usage to me.
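If it helps to see the definition as code, here’s a minimal sketch under my own assumptions (every name here is hypothetical): one step of an agent loop that reads the world, scores predicted states against a goal, and picks the action its model says moves the world toward that goal.

```python
def simple_agent_step(observe, goal, actions, predict):
    """One step of the loop described above.

    observe()         -> current world state (the inputs the agent can accept)
    goal(state)       -> how preferred that state is (higher is better)
    actions           -> the actions available to the agent
    predict(state, a) -> the agent's model of the state after taking action a
    """
    state = observe()
    # Rationality, on this framing, is roughly how well this argmax tracks
    # reality given the agent's goals and limited resources; the "other stuff"
    # is things like the range of inputs it accepts and the raw horsepower
    # behind the prediction.
    return max(actions, key=lambda a: goal(predict(state, a)))


# Toy usage: a thermostat-like agent that wants the temperature near 21 degrees.
world = {"temp": 17.0}
chosen = simple_agent_step(
    observe=lambda: dict(world),
    goal=lambda s: -abs(s["temp"] - 21.0),
    actions=["heat", "cool", "idle"],
    predict=lambda s, a: {"temp": s["temp"] + {"heat": 1.0, "cool": -1.0, "idle": 0.0}[a]},
)
print(chosen)  # -> "heat"
```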
I suppose if you really can’t stand the main character, there’s not much point in reading the thing.
I was somewhat aggravated by the first few chapters, in particular the conversation between Harry and McGonagall about the medical kit. Was that the one where you had your aggravated reaction?
I found myself sympathizing with both sides, and wishing Harry would just shut up—and then catching myself and thinking “but he’s completely right. And how can he back down on this when lives are potentially at stake, just to make her feel better?”
I would go even further and point out how Harry’s arrogance is good for the story. Here’s my approach to this critique:
“You’re absolutely right that Harry!HPMOR is arrogant and condescending. It is a clear character flaw, and repeatedly gets in the way of his success. As part of a work of fiction, this is exactly how things should be. All people have flaws, and a story about a character with no flaws wouldn’t be interesting to read!
Harry suffers significantly due to this trait, which is precisely what a good author does with their characters.
Later on there is an entire section dedicated to Harry learning “how to lose,” and growing to not be quite as blind in this way. If his character didn’t have anywhere to develop, it wouldn’t be a very good story!”
Agreed on all points.
It sounds like we’re largely on the same page, noting that what counts as “disastrous” can be somewhat subjective.
Anytime you’re thinking about buying insurance, double check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a “rainy day fund” rather than buying ten different types of insurance.
In general, if you can financially survive the bad thing, then buying insurance isn’t a good idea. This is why it almost never makes sense to insure a $1000 computer or get the “extended warranty.” Just save all the money you would spend on extended warranties for your devices, and if one breaks, pay out of pocket to repair it or get a new one.
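To put rough numbers on the extended-warranty case, here’s a toy expected-value comparison; every figure is made up purely for illustration.

```python
# Every number here is hypothetical, chosen only to illustrate the comparison.
repair_probability = 0.10    # assumed chance the device needs a repair in the period
repair_cost = 300.0          # assumed out-of-pocket repair cost
warranty_price = 150.0       # assumed price of the extended warranty

expected_self_insure_cost = repair_probability * repair_cost   # = $30
print(f"self-insure: ~${expected_self_insure_cost:.0f} expected, warranty: ${warranty_price:.0f}")
# If you can absorb the worst case ($300), self-insuring wins in expectation;
# the warranty only makes sense when the worst case would genuinely hurt you.
```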
This is a harshly rational view, though I certainly appreciate that some people get “peace of mind” from having insurance, which can have real value.
In the publishing industry, it is emphatically not the case that you can take a random unknown author’s book, throw a major marketing campaign behind it, and sell millions of copies. It’s nearly impossible to replicate that success even with an amazing book!
For all its flaws (and it has many), Fifty Shades had something that the market was ready for. Literary financial successes like this happen only a couple times a decade.
Isn’t that a necessary part of steelmanning an argument you disagree with? My understanding is that you strengthen all the parts that you can think of to strengthen, but ultimately have to leave in the bit that you think is in error and can’t be salvaged.
Once you’ve steelmanned, there should still be something that you disagree with. Otherwise you’re not steelmanning, you’re just making an argument you believe in.
If the five year old can’t understand, then I think “Yes” is a completely decent answer to this question.
If I were in this situation, I would write letters to the child to be delivered/opened as they grew older. This way I would still continue to have an active effect on their life. We “exist” to other people when we have measurable effects on them, so this would be a way to continue to love them in a unidirectional way.
That depends on whether you think that: a) the past ceases to exist as time passes, or b) the universe is all of the past and all of the future, and we just happen to experience it in a certain chronological order
The past may still be “there,” but inaccessible to us. So the answer to this question is probably to dissolve it. In one sense, I won’t still love you. In another, my love will always exist and always continue to have an effect on you.
I’m not disagreeing with the general thrust of your comment, which I think makes a lot of sense.
But it’s not at all required that an AGI start out with the ability to parse human languages effectively. An AGI is an alien. It might grow up with a completely different sort of intelligence, and only at the late stages of growth gain the ability to interpret and model human thoughts and languages.
We consider “write fizzbuzz from a description” to be a basic task of intelligence because it is for humans. But humans are the most complicated machines in the solar system, and we are naturally good at dealing with other humans because we instinctively understand them to some extent. An AGI may be able to accomplish quite a lot before it can comprehend human-style intelligence through raw general intelligence and massive amounts of data and study.
An AI Takeover Thought Experiment
It’s hard to judge just how important it is, because I have fairly regular access to it. However, food options definitely figure into long-term plans. For instance, the number of good food options around my office is a small but very real benefit that helps keep me in my current job. Similarly, while plenty of things can trump food, I would see the lack of quality food as a major downside to volunteering to live in the first colony on Mars. Which doesn’t mean it would be decisive, of course.
I will suppress urges to eat in order to have the optimal experience at a good meal. I like to build up a real amount of hunger before I eat, as I find that a more pleasant experience than grazing frequently.
I try to respect the hedonist inside me, without allowing him to be in control. But I think I’m starting to lean pro-wireheading, so feel free to discount me on that account.
I’m pretty confident that I have a strong terminal goal of “have the physiological experience of eating delicious barbecue.” I have it in both near and far mode, and it remains even when it is disadvantageous in many other ways. Furthermore, I have it much more strongly than anyone I know personally, so it’s unlikely to be a function of peer pressure.
That said, my longer-term goals seem to be a web of both terminal and instrumental values. Many things are terminal goals while also having instrumental value. Sex is a good in itself, but it also feeds other big-picture psychological and social needs.
Less Wrongers voting here are primed to include how others outside of LW react to different terms in their calculations. I interpreted “best sounding” as “which will be the most effective term,” and imagine others did as well. Strategic thinking is kind of our thing.
Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.
There is so much about the human experience which is very particular to humans. Is it necessary to create an AI with a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction? Yes, an FAI eventually needs to have complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.
Spending a lot of time trying to fool humans into thinking that a machine can empathize with them seems almost counterproductive. I’d rather the AIs honestly relate what they are experiencing, rather than try to pretend to be human.
It would absolutely be an improvement on the current system, no argument there.
I was recently linked to this Wired article from a few months back on new results in the Bohmian interpretation of Quantum Mechanics: http://www.wired.com/2014/06/the-new-quantum-reality/
Should we be taking this seriously? The ability to duplicate the double-slit experiment at a classical scale is pretty impressive.
Or maybe this is still just wishful thinking trying to escape the weirdnesses of the Copenhagen and Many Worlds interpretations.