The lawyer wants both warm fuzzies and charitrons, but has conflated the two, and will probably get buzzkilled (and lose out on both measures) if the distinction is made clear. The best outcome is one where the lawyer gets to maximize both, and that happens at the end of a long road that begins with introspection about what warm fuzzies ought to mean.
It would probably be best to just remove all questions that contain certain key phrases like “this image” or “seen here”. You’ll get a few false positives but with such a big database that’s no great loss.
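Something like this rough Python sketch is what I have in mind (the phrase list and sample questions here are made up for illustration):

    # Drop questions that depend on an image we no longer have.
    # The phrase list and sample data are hypothetical stand-ins.
    BAD_PHRASES = ("this image", "seen here")

    def is_image_dependent(question):
        text = question.lower()
        return any(phrase in text for phrase in BAD_PHRASES)

    questions = [
        "What is the capital of France?",
        "What animal is shown in this image?",
        "Who painted the portrait seen here?",
    ]

    kept = [q for q in questions if not is_image_dependent(q)]
    print(kept)  # only the France question survives

Lowercasing before the substring check catches most casing variants; anything fancier probably isn't worth the effort, given how cheap a false positive is here.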
Seconded on that video; it’s cheesy but very straightforward and informative.
Agreed (half-jokingly; alas).
Kristyn discusses a Lojban tanru which is a member.
While an interesting idea, I believe most people just call this “gambling”.
I’m not sure what you’re driving at here. A gambling system where everybody has a net expected gain is still a good use of randomness.
A human running quicksort with certain expectations about its performance might require a particular distribution, but that’s not a characteristic of software.
I think this may be a distinction without a difference; modularity can also be defined as human expectations about software, namely that the software will be relatively easy to hook into a larger system.
Try wdiff
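For example (GNU wdiff; the filenames here are just placeholders): wdiff old_draft.txt new_draft.txt. Unlike plain line-based diff, it compares the files word by word, marking deletions like [-this-] and insertions like {+this+}, which is much easier to read when prose has been reflowed.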
That might be a distinction without a difference; my preferences come partly from my instincts.
Here’s a hacky solution. I suspect that it isn’t actually even a valid solution, since I’m not very familiar with the subject matter, but I’m interested in finding out why.
The relationship between one’s map and the territory is much easier to explain from outside than from inside. Hypotheses about the maps of other entities can be expressed entirely as hypotheses about the territory, so long as they make predictions about that entity’s physical responses.
Therefore: can’t we sidestep the problem by having the AI consider its future map state as a step in the middle of its hypothetical explanation of how some other AI in the territory would react to a given territory state? The hacky part is then to just hard-wire the AI to treat any such hypothesis as being potentially about itself, to be confirmed or disconfirmed by reflecting on its own output (perhaps via some kind of loopback).
AIUI this should allow the AI to consider any hypothesis about its own operation without requiring that it be able to deeply reflect on its own map as part of the territory, which seems to be the source of the trouble.
The next best thing to have after a reliable ally is a predictable enemy.
-- Sam Starfall, FreeFall #1516
The evaluator, which determines the meaning of expressions in a program, is just another program.
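A toy illustration of that idea, in a Python sketch of my own (not from any particular source): an evaluator for a tiny arithmetic language, itself just a short, ordinary program.

    # Expressions are numbers, or (operator, left, right) tuples.
    # The evaluator that gives them meaning is just another program.
    def evaluate(expr):
        if isinstance(expr, (int, float)):  # a number means itself
            return expr
        op, left, right = expr
        a, b = evaluate(left), evaluate(right)
        if op == "+":
            return a + b
        if op == "*":
            return a * b
        raise ValueError("unknown operator: " + repr(op))

    print(evaluate(("+", 1, ("*", 2, 3))))  # prints 7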
I’ve been trying very hard to read the paper at that link for a while now, but honestly I can’t figure it out. I can’t even find anything content-wise to criticize because I don’t understand what you’re trying to claim in the first place. Something about the distinction between map and territory? But what the heck does that have to do with ethics and economics? And why the (seeming?) presumption of Christianity? And what does any of that have to do with this graph-making software you’re trying to sell?
It would really help me if you could do the following:
Read this short story: http://yudkowsky.net/rational/the-simple-truth/
Please explain, using language no more complicated or technical than that used in the story, whether the idea of “truth” that the story proposes lines up with your philosophy or not, and why or why not.
Good point, probably the title should be “What is a good puzzle?” then.
That’s interesting! I’ve had very different experiences:
When I’m trying to solve a puzzle and learn that it had no good answer (i.e. that it was just nonsense, not even rising to the level of a trick question), it’s very frustrating. It retroactively makes me unhappy about having spent all that time on it, even though I was enjoying myself at the time.
A puzzle is fun,
and it has a right answer.
-- Scott Kim, What is a Puzzle?
http://www.scottkim.com/thinkinggames/whatisapuzzle/
Why does the hard takeoff point have to be after the point at which an AI is as good as a typical human at understanding semantic subtlety? In order to do a hard takeoff, the AI needs to be good at a very different class of tasks than those required for understanding humans that well.
So let’s suppose that the AI is as good as a human at understanding the implications of natural-language requests. Would you trust a human not to screw up a goal like “make humans happy” if they were given effective omnipotence? The human would probably do about as well as people in the past have at imagining utopias: really badly.
So what is Mr. Turing’s computer like? It has these parts:
The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
The head, which reads numbers from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the whole head has to move left or right to that new place first. Also, it can only move one space at a time.
The memory. Our computers today have lots of memory, but Mr. Turing’s computer has only enough memory for one thing at a time. The thing being remembered is the “state” of the computer, like a “state of mind”.
The table, which is a plan that tells the computer what to do when it is in each state. There are only so many different states that the computer might be in, and we have to put them all in the table before we run the computer, along with the next steps the computer should take when it reads different numbers in each state.
Looking closer, each line in the table has five parts, which are:
If Our State Is this
And The Number Under Head Is this
Then Our Next State Will Be this (or maybe the computer just stops here)
And The Head Should write this
And Then The Head Should move this way
Here’s a simple table:
Happy 1 Happy 1 Right
Happy 2 Happy 1 Right
Happy 3 Sad 3 Right
Sad 1 Sad 2 Right
Sad 2 Sad 2 Right
Sad 3 Stop
Okay, so let’s say that we have one of Mr. Turing’s computers built with that table. It starts out in the Happy state, and its head is on the first number of a paper like this:
1 2 1 1 2 1 3 1 2 1 2 2 1 1 2 3
What will the paper look like after the computer is done? Try pretending you are the computer and see what you do! The answer is at the end.
So you can see now that the table is the plan for what the computer should do. But we still have not fixed Mr. Babbage’s problem! To make the computer do different things, we have to open it up and change the table. Since the “table” in any real computer will be made of very many parts put together very carefully, this is not a good way to do it!
So here is the amazing part that surprised everyone: you can make a great table that can act like any other table if you give it the right numbers on the paper. Some of the numbers on the paper tell the computer about a table for adding, and the rest of the numbers are to be added. The person who made the great table did not even have to know anything about adding, as long as the person who wrote the first half of the paper did.
Our computers today have tables like this great table, and so almost everything fun or important that they do is given to them long after they are built, and it is easy to change what they do.
By the way, here is how the paper from before will look after a computer with our simple table is done with it:
1 1 1 1 1 1 3 2 2 2 2 2 2 2 2 3
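If you want to check your work with a real computer, here is a little plan in the Python computer language that pretends to be Mr. Turing’s computer (the names in it are my own; the table and the paper are the ones from above):

    # The table is just data: (state, number under head) -> what to do next.
    table = {
        ("Happy", 1): ("Happy", 1, "Right"),
        ("Happy", 2): ("Happy", 1, "Right"),
        ("Happy", 3): ("Sad",   3, "Right"),
        ("Sad",   1): ("Sad",   2, "Right"),
        ("Sad",   2): ("Sad",   2, "Right"),
        ("Sad",   3): None,  # the computer just stops here
    }

    paper = [1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 2, 2, 1, 1, 2, 3]
    state, head = "Happy", 0

    while True:
        step = table[(state, paper[head])]
        if step is None:  # the Stop row
            break
        state, write, move = step
        paper[head] = write  # the head writes
        head += 1 if move == "Right" else -1  # then the head moves

    print(" ".join(str(n) for n in paper))
    # prints: 1 1 1 1 1 1 3 2 2 2 2 2 2 2 2 3

Notice that the program above never changes; only the table and the paper do. That is the same trick as the great table: the plan is just more numbers you feed in.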
Mr. Turing’s Computer
Computers in the past could only do one kind of thing at a time. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.
To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!
So a man named Mr. Babbage thought: what if some of the numbers you gave the computer were what told it what to do? That way you could have just one computer, and you could quickly make it be a number-adding computer, or a smallest-number-finding computer, or any kind of computer you wanted, just by giving it different numbers. But although Mr. Babbage and his friend Ms. Lovelace tried very hard to make a computer like that, they could not do it.
But later a man named Mr. Turing thought up a way to make that computer. He imagined a long piece of paper with numbers written on it, and imagined a computer moving left and right along that paper and reading the numbers on it, and sometimes changing the numbers. This computer could only see one number on the paper at a time, and also only remember one thing at a time, but that was enough for the computer to know what to do next. Everyone was amazed that such a simple computer could do anything that any other computer then could do; all you had to do was put the right numbers on the paper first, and then the computer could do something different! Mr. Turing’s idea was enough to let people build computers that finally acted like Mr. Babbage’s and Ms. Lovelace’s dream computer.
Even though Mr. Turing’s computer sounds way too simple when you think about our computers today, our computers can’t do anything that Mr. Turing’s imagined computer can’t. Our computers can look at many many numbers and remember many many things at the same time, but this only makes them faster than Mr. Turing’s computer, not actually different in any important way. (Though of course being fast is very important if you want to have any fun or do any real work on a computer!)
Taboo “faith”: what do you mean specifically by that term?