“I don’t see how such a mind could possibly do anything that we consider mind-like, in practice.”
This is a fabulous way of putting it. “In practice” may even be too strong a caveat.
“There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.”
You’ve been making this point a lot lately. But I don’t see any reason for “mind design space” to have that kind of symmetry. Why do you believe this? Could you elaborate on it at some point?
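To make the contrast concrete, here is a minimal Python sketch (my own illustration, not anything from your post) of a Laplacian updater next to the mirror-image anti-Laplacian one you describe:

    # Laplace's rule of succession: after s successes in n trials,
    # the probability of another success is (s + 1) / (n + 2).
    def laplacian(successes, trials):
        return (successes + 1) / (trials + 2)

    # An "anti-Laplacian" mirror image (my construction): the more often
    # something has happened, the LESS likely it is judged to happen again.
    def anti_laplacian(successes, trials):
        return 1 - laplacian(successes, trials)

    # The sun has risen on 10 of the last 10 mornings.
    print(laplacian(10, 10))       # ~0.92: expect another sunrise
    print(anti_laplacian(10, 10))  # ~0.08: expect no sunrise tomorrow

Both rules are coherent as probability assignments, and nothing in the formalism itself privileges the first, which is presumably your point. My question is about the symmetry of the “space,” not about the bare possibility.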
John Horgan is a sloppy thinker. But if this were a contest to strengthen or weaken the credibility of AI research (a kind of status competition), then I think he got the better of you.
Is it important to convince nonprofessionals that the singularity is plausible, in advance of it actually happening? If so, then you need to find a way to address the “this is just an apocalyptic religion” charge that Mr. Horgan brings here. It will not be the last time you hear it, and it is particularly devastating in its own somewhat illogical way.

1. All people dismiss most claims that their lives will be radically different in the near future, without giving them due consideration.
2. This behavior is rational! At least, it is useful, since nearly all such claims are bogus and “due consideration” is costly.
3. Your own claims can be easily caricatured as resembling millenarianist trash (singularity = rapture, etc.; one of the bloggingheads commenters makes a crack about your “messianism” as a product of your Jewish upbringing).
How do you get through the spam filter? I don’t know, but “read my policy papers” sounds too much like “read my manifesto.” It doesn’t distinguish you from the crazy people until after someone has read the papers, so nobody will. (Mr. Horgan didn’t. Were you really surprised?) You need to find sound bites if you’re going to appear on bloggingheads at all.
In the political blogosphere, this is called “concern trolling.” Whatever.
Uh, I guess what I’m trying to say is: what do you mean by that, Mr. Yudkowsky?
I remember sitting there staring at the “linear operators”, trying to figure out what the hell they physically did to the eigenvectors - trying to visualize the actual events that were going on in the physical evolution—before it dawned on me that it was just a math trick to extract the average of the eigenvalues.
If anyone else had written this sentence, I would think to myself, “Jeez, this guy doesn’t know what he’s talking about.” Did this whole thing start because you don’t understand linear algebra? Linear algebra (1) is an excellent formalism for quantum mechanics, and (2) can be taught to high school students, provided they can visualize what matrices do to their eigenvectors, i.e., scale them. In any case, if you don’t know much linear algebra, this anonymous blog commenter heartily recommends it. It’s really useful in all kinds of situations, even day to day.
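For what it’s worth, here is a minimal numpy sketch (my own illustration, nothing from your post) of both points: applying a matrix to one of its eigenvectors merely scales it, and the “math trick” in question, the expectation value, is a weighted average of the eigenvalues.

    import numpy as np

    # A symmetric matrix, standing in for an observable.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigenvalues: 1 and 3

    for lam, v in zip(eigenvalues, eigenvectors.T):
        # Applying A to an eigenvector just scales it by its eigenvalue.
        print(np.allclose(A @ v, lam * v))  # True, True

    # The "math trick": for a normalized state psi, psi^T A psi is an
    # average of the eigenvalues, weighted by squared components of psi
    # in the eigenbasis.
    psi = np.array([0.6, 0.8])  # already unit length
    print(psi @ A @ psi)        # 2.96, between the eigenvalues 1 and 3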
But I haven’t yet sat down and formalized the exact difference—my reflective theory is something I’m trying to work out, not something I have in hand.
“The principle of induction is true” is a statement that cannot be justified. “You should use the principle of induction when thinking about the future” can be justified along the lines of Pascal’s wager: if one uses induction in a universe where it does in fact work, one will make predictions more accurate than predictions chosen at random; if one uses it in a universe where it doesn’t work, one’s predictions will be no less accurate than predictions chosen at random. Using induction therefore weakly dominates not using it. But I don’t think you can construct a Pascal-style argument in favor of “you should use induction when thinking about induction.” It would be interesting if you came up with something.
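In case it helps, here is a toy Python simulation of that wager (my own construction, reading “a universe where induction doesn’t work” as a lawless, patternless one):

    import random

    def inductive_predictor(history):
        # Predict the majority outcome so far (ties go to 1).
        return 1 if sum(history) * 2 >= len(history) else 0

    def random_predictor(history):
        return random.randint(0, 1)

    def accuracy(universe, predictor, trials=10_000):
        history, hits = [], 0
        for _ in range(trials):
            outcome = universe(history)
            hits += predictor(history) == outcome
            history.append(outcome)
        return hits / trials

    lawful = lambda history: 1                      # the sun rises every day
    lawless = lambda history: random.randint(0, 1)  # no pattern at all

    print(accuracy(lawful, inductive_predictor))    # ~1.0: better than chance
    print(accuracy(lawful, random_predictor))       # ~0.5
    print(accuracy(lawless, inductive_predictor))   # ~0.5: no worse than chance
    print(accuracy(lawless, random_predictor))      # ~0.5

Note the caveat built into my reading: in a deliberately anti-inductive universe the inductive predictor would do worse than chance, so the dominance argument depends on “doesn’t work” meaning “patternless” rather than “adversarial.”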