Assume that humanity managed to create a friendly AI (FAI). Given the enormous amount of resources that each human is poised to consume until the dark era of the universe, wouldn’t the same arguments that now suggest that we should contribute money to existential risk charities then suggest that we should donate our resources to the friendly AI? Our resources could enable it to find a way to either travel back in time, leave the universe or hack the matrix. Anything that could avert the end of the universe and allow the FAI to support many more agents has effectively infinite expected utility.
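The pull of that worry is a Pascal-style expected-value comparison. Here is a minimal sketch of its arithmetic, with every probability and utility invented purely for illustration:

```python
# Illustrative expected-value comparison; every number below is invented.
# The structure of the argument: a huge enough payoff makes even a tiny
# probability dominate any ordinary alternative.

p_hack = 1e-40    # invented: probability the FAI can "hack the matrix"
u_hack = 1e100    # invented: utility of averting the end of the universe
u_enjoy = 1e9     # invented: utility of humans simply enjoying their resources

ev_donate = p_hack * u_hack   # = 1e60, dwarfing the alternative
ev_keep = u_enjoy             # = 1e9

print(f"EV(donate everything to the FAI): {ev_donate:.0e}")
print(f"EV(keep resources, enjoy life):   {ev_keep:.0e}")
```

Once u_hack is set high enough, no honest estimate of p_hack can keep the donation branch from dominating; the “effectively infinite expected utility” is built into the premises.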
The connotative argument here seems to be:
A seems to be a good idea.
B belongs in the same reference class as A.
But B is absurd!
Therefore,
A is absurd.
And so,
Don’t trust reasoning. (Or, at least, don’t trust the reasoning of people who disagree with me.)
To this I just say that no, B doesn’t belong in reference class A. Dedicating the future light cone to Neotronium (an AI dedicating all available resources to hacking the matrix) just seems like something with low expected utility, so I wouldn’t do it, given the choice.
No, not absurd. I was worried that we’d never get to the point where we actually “enjoy life” as human beings.
No, that’s not what I wanted to argue. I wrote in the post that we should continue to use our best methods. We should try to solve friendly AI. I said that we should be careful and discount some of the implied utility.
Take, for example, the use of Bayes’ theorem. I am not saying that we shouldn’t use it; that would be crazy. What I am saying is that we should be careful in how we use such methods.
If, for example, you use probability theory to update on informal arguments or anecdotal evidence, you are still using your intuition to assign weight to the evidence. Using math and numeric probability estimates might make you unjustifiably confident in your results, because you mistakenly believe that you don’t rely on your intuition.
I am not saying that we shouldn’t use math to refine our intuition. What I am saying is that we can still be wrong by many orders of magnitude as long as we are using our heuristics in an informal setting rather than evaluating data supplied by experimentation.
Take this example. Julia Galef wrote:
But then I thought about Bayes’ rule and realized I was wrong — even a convincing-sounding “yes” gives you some new information. In this case, H = “He thinks I’m pretty” and E = “He gave a convincing-sounding ‘yes’ to my question.” And I think it’s safe to assume that it’s easier to sound convincing if you believe what you’re saying than if you don’t, which means that P(E | H) > P(E | not-H). So a proper Bayesian reasoner encountering E should increase her credence in H.
But by how much should a proper Bayesian reasoner increase her credence in H? Bayes’ rule only tells us by how much given its inputs, and those inputs (the prior and the two likelihoods) are often filled in by our intuition.
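To make that concrete, here is a minimal Python sketch; the posterior helper and every number in it are my own stand-ins for the intuition-supplied inputs. The qualitative intuition P(E | H) > P(E | not-H) holds in every row, yet the answer to “by how much?” swings enormously:

```python
# Posterior P(H|E) via Bayes' rule:
#   P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|not-H) P(not-H))
# H = "He thinks I'm pretty", E = "a convincing-sounding 'yes'".

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E); every argument is an intuition-supplied guess."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.5          # guess: no idea either way beforehand
p_e_given_h = 0.9    # guess: sincerity makes convincingness likely

# The same qualitative intuition, P(E|H) > P(E|not-H), is consistent
# with very different posteriors depending on how it is quantified:
for p_e_given_not_h in (0.8, 0.5, 0.1, 0.01):
    print(f"P(E|not-H) = {p_e_given_not_h:<4}  ->  "
          f"P(H|E) = {posterior(prior, p_e_given_h, p_e_given_not_h):.3f}")
```

The direction of the update is pinned down by the inequality, but its size, anywhere from about 0.53 to about 0.99 here, comes entirely from numbers that intuition filled in. That is exactly the kind of wiggle room I mean.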