I’m sorry you had to go through this. I’ve been to three Catholic funerals over the past two years, and found them all to be particularly painful. I actually refused requests to perform readings, and thought about doing a eulogy like this. I didn’t, and I’m impressed that you had the courage to do so.
MinibearRex
In discussions about a month or so ago, people expressed interest in running posts by Hanson, as well as a few others (Carl Shulman and James Miller), as part of the AI FOOM Debate. This is the 12th post in that debate by someone other than Yudkowsky. After today, there are 18 more posts left in the debate, of which 9 are by Hanson. After that, we will return to the usual practice of just rerunning Yudkowsky’s sequences.
Every now and then, the Wiki seems to decide that my IP address is spamming the Wiki, and autoblocks it. Sometimes this goes away in a day or so, and sometimes it doesn’t. In the event that it doesn’t, making a new username seems to resolve the issue, for some reason. I’m currently on account number 4, named “Wellthisisaninconvenience”, which is different from my previous account, “Thisisinconvenient”.
Perhaps there is nothing in Nature more pleasing than the study of the human mind, even in its imperfections or depravities; for, although it may be more pleasing to a good mind to contemplate and investigate the application of its powers to good purposes, yet as depravity is an operation of the same mind, it becomes at least equally necessary to investigate, that we may be able to prevent it.
-John Hunter
Don’t think, try the experiment.
-John Hunter
I think nigerweiss is asserting that “The experiment requires that you continue” activates System 1 but not System 2.
Prior probabilities seem to me to be the key idea. Essentially, young earth creationists want P(evidence|hypothesis) ≈ 1. The problem is that to do this, you have to make P(hypothesis) very small. Essentially, they’re overfitting the data. P(no god) and P(deceitful god) may have identical likelihood functions, but the second one is a conjunction of a lot of statements (god exists, god created the world, god created the world 4000 years ago, god wants people to believe he created the world 4000 years ago, god wants people to believe he created the world 4000 years ago despite evidence to the contrary, etc.). Each of those statements further shrinks the prior probability that goes into the Bayesian update.
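The conjunction point can be sketched numerically. A minimal illustration, where all the probability values are made-up assumptions chosen only to show the mechanism:

```python
# Two hypotheses with identical likelihoods for the observed evidence:
# H1 is a single claim; H2 is a conjunction of several claims.
# All numbers below are illustrative assumptions, not measured values.

likelihood = 0.99   # P(evidence | H) — assumed identical for both hypotheses

prior_h1 = 0.5      # prior for the single-claim hypothesis (assumed)

# The conjunctive hypothesis multiplies the probability of each claim:
claim_probs = [0.5, 0.5, 0.5, 0.5, 0.5]  # five conjoined claims (assumed)
prior_h2 = 1.0
for p in claim_probs:
    prior_h2 *= p   # 0.5 ** 5 = 0.03125

# Unnormalized posteriors: P(H | evidence) ∝ P(evidence | H) * P(H)
posterior_h1 = likelihood * prior_h1
posterior_h2 = likelihood * prior_h2

# Identical likelihoods cancel, so the posterior ratio equals the prior ratio:
print(posterior_h1 / posterior_h2)  # 16.0
```

Because the likelihoods are equal, the evidence does nothing to separate the two hypotheses; the conjunctive one simply starts, and therefore ends, far behind.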
I thought the explanations were just poorly written. But given that Luke and others seem to have reviewed it positively, I’d guess that it is substantially better than others.
Why does the table indicate that we haven’t observed pandemics the same way we’ve observed wars, famines, and earth impactors?
For what it’s worth, I haven’t found any of the Cambridge Introduction to Philosophy series to be particularly good. The general sense I have is that they’re better used as a reference, if you can’t remember exactly how the professor explained something, than as a source for actually trying to learn the topic independently. That said, I haven’t read the decision theory one, so take this with a grain of salt.
I think any message of this sort is likely to lead to some unpleasantness. “Hey, I just downvoted a whole bunch of your old posts, but it’s ok because I actually did think that all of those posts were bad.” Downvote things that deserve to get downvoted, but don’t make a scene out of it that’s just going to poison the discussion.
Are you planning to do anything like the ritual sequence again this year?
This post is by James Miller, who posted about a year ago that he was writing a book. It’s apparently out now, and seems to have received endorsements from some recognizable figures. If anyone here has read it, how worthwhile a read would it be for someone already familiar with the idea of the singularity?
So if I were talking about the effect of e.g. sex as a meta-level innovation, then I would expect e.g. an increase in the total biochemical and morphological complexity that could be maintained—the lifting of a previous upper bound, followed by an accretion of information. And I might expect a change in the velocity of new adaptations replacing old adaptations.
But to get from there, to something that shows up in the fossil record—that’s not a trivial step.
I recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spoke of the “sudden burst of creativity” represented by the Cambrian explosion, and wondered why evolution was proceeding so much more slowly nowadays. And another party responded that the Cambrian differentiation was mainly visible post hoc—that the groups of animals we have now, first differentiated from one another then, but that at the time the differences were not as large as they loom nowadays. That is, the actual velocity of adaptational change wasn’t remarkable by comparison to modern times, and only hindsight causes us to see those changes as “staking out” the ancestry of the major animal groups.
I’d be surprised to learn that sex had no effect on the velocity of evolution. It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation. But to go from there, to just looking at the fossil record and seeing faster progress—it’s not just me who thinks that this jump to phenomenology is tentative, difficult, and controversial.
Should you expect more speciation after the invention of sex, or less? The first impulse is to say “more”, because sex seems like it should increase the optimization velocity and speed up time. But sex also creates mutually reproducing populations, that share genes among themselves, as opposed to asexual lineages—so might that act as a centripetal force?
The idea that the development of sex didn’t speed up the process of speciation would, if true, be important for a certain problem I’m currently working on. Could anyone point me towards some sort of academic discussion on the subject?
As the problems get more difficult, or require more optimization, the AI has more optimization power available. That might or might not be enough to compensate for the increase in difficulty.
Thanks. I appreciate that.
What do people think of this idea? I’m personally interested in reading all of the debate, and I think I will, no matter what I wind up posting, so nobody else needs to feel lonely if they want to see all of it.
I think so, but truth be told, I’ve never actually read through all of it myself. The bits I have seen suggest that they hold positions in those debates similar to their positions in the original argument.
From the next post in the sequences:
There does exist a rare class of occasions where we want a source of “true” randomness, such as a quantum measurement device. For example, you are playing rock-paper-scissors against an opponent who is smarter than you are, and who knows exactly how you will be making your choices. In this condition it is wise to choose randomly, because any method your opponent can predict will do worse than average.
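The quoted point can be sketched with a short simulation. The “perfect predictor” opponent, the payoff values, and the round counts below are all assumptions for the illustration:

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, 0 for a tie, -1 for a loss."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def exploit(predicted):
    """A perfect predictor plays the move that beats the predicted move."""
    return next(m for m in MOVES if BEATS[m] == predicted)

# A deterministic player is fully exploitable: the predictor wins every round.
deterministic_score = sum(payoff("rock", exploit("rock")) for _ in range(1000))
print(deterministic_score)  # -1000

# A uniformly random player gives the predictor nothing to exploit: whatever
# the opponent plays, the long-run average payoff is 0.
random.seed(0)
random_score = sum(payoff(random.choice(MOVES), "paper") for _ in range(1000))
print(random_score)  # hovers near 0
```

The deterministic strategy loses every single round to a predictor, while the random strategy’s score stays near zero no matter what the opponent does, which is the sense in which any predictable method does worse than average.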
The Wikipedia page on the blind spot contains a good description, as well as a diagram of a vertebrate eye alongside the eye of an octopus, which does not have the same feature.