Unfortunately I don’t know what the methods of rationality are for computationally bounded agents, or I’d use them instead. (And it’s not for lack of effort to find out either.)
So failing that, do you think studying decision theories that assume unlimited computational resources has introduced any specific biases into my thinking that I’ve failed to correct? Or any other advice on how I can do better?
So failing that, do you think studying decision theories that assume unlimited computational resources has introduced any specific biases into my thinking that I’ve failed to correct?
Let me answer with a counter-question. Do you think that studying decision theories increased your chance of “winning”? If yes, then there you go. Because I haven’t seen any evidence that it is useful, or will be useful, beyond the realm of philosophy. And most of it will probably be intractable or useless even for AIs.
Or any other advice on how I can do better?
That’s up to how you define “winning”. If you define “winning” in relation to “solving risks from AI”, then it will be almost impossible to do better. The problem is that you don’t know what to anticipate because you don’t know the correct time frame and you can’t tell how difficult any possible subgoals are. That uncertainty allows you to retrospectively claim that any failure is not because your methods are suboptimal but because the time hasn’t come or the goals were much harder than you could possibly have anticipated, and thereby fool yourself into thinking that you are winning when you are actually wasting your time.
So failing that, do you think studying decision theories that assume unlimited computational resources has introduced any specific biases into my thinking that I’ve failed to correct?
For example, 1) taking ideas too seriously, 2) believing that you can approximate computationally intractable methods and use them under real-life circumstances or to judge predictions like risks from AI, and 3) believing in the implied invisible without appropriate discounting.
A part of me wants to be happy, comfortable, healthy, respected, not work too hard, not be bored, etc. Another part wants to solve various philosophical problems “soon”. Another wants to eventually become a superintelligence (or help build a superintelligence that shares my goals, or the right goals, whichever makes more sense), with as many resources under my/its control as possible, in case that turns out to be useful. I don’t know how “winning” ought to be defined, but the above seem to be my current endorsed and revealed preferences.
Do you think that studying decision theories increased your chance of “winning”?
Well, I studied it in order to solve some philosophical problems, and it certainly helped for that.
If yes, then there you go. Because I haven’t seen any evidence that it is useful, or will be useful, beyond the realm of philosophy.
I don’t think I’ve ever claimed that studying decision theory is good for making oneself generally more effective in an instrumental sense. I’d be happy as long as doing it didn’t introduce some instrumental deficits that I can’t easily correct for.
That uncertainty allows you to retrospectively claim that any failure is not because your methods are suboptimal
Suboptimal relative to what? What are you suggesting that I do differently?
For example, 1) taking ideas too seriously
I do take some ideas very seriously. If we had a method of rationality for computationally bounded agents, it would surely do the same. Do you think I’ve taken the wrong ideas too seriously, or have spent too much time thinking about ideas generally? Why?
2) believing that you can approximate computationally intractable methods and use them under real-life circumstances or to judge predictions like risks from AI, and 3) believing in the implied invisible without appropriate discounting.
Can you give some examples where I’ve done 2 or 3? For example here’s what I’ve said about AI risks:
Since we don’t have good formal tools for dealing with logical and philosophical uncertainty, it seems hard to do better than to make some incremental improvements over gut instinct. One idea is to train our intuitions to be more accurate, for example by learning about the history of AI and philosophy, or learning known cognitive biases and doing debiasing exercises. But this seems insufficient to bridge the widely differing intuitions people have on these questions.
My own feeling is [...]
Do you object to this? If so, what should I have said instead?
I do take some ideas very seriously. If we had a method of rationality for computationally bounded agents, it would surely do the same. Do you think I’ve taken the wrong ideas too seriously, or have spent too much time thinking about ideas generally? Why?
This comment of yours, among others, gave me the impression that you take ideas too seriously.
You wrote:
According to the article, the AGI was almost completed, and the main reason his effort failed was that the company ran out of money due to the bursting of the bubble. Together with the anthropic principle, this seems to imply that Ben is the person responsible for the stock market crash of 2000.
This is fascinating for sure. But if you have a lot of confidence in such reasoning then I believe you do take ideas too seriously.
I agree with the rest of your comment and recognize that my perception of you was probably flawed.
Yeah, that was supposed to be a joke. I usually use smiley faces when I’m not being serious, but thought the effect of that one would be enhanced if I “kept a straight face”. Sorry for the confusion!
I see, my bad. I had so far believed myself to be usually pretty good at detecting when someone is joking. But given what I have encountered on Less Wrong in the past, including serious treatments and discussions of the subject, I thought you were actually meaning what you wrote there. Although now I am not so sure anymore if people were actually serious on those other occasions :-)
I am going to send you a PM with an example.
Under normal circumstances I would actually regard the following statements by Ben Goertzel as sarcasm:
Of course, this faith placed in me and my team by strangers was flattering. But I felt it was largely justified. We really did have a better idea about how to make computers think. We really did know how to predict the markets using the news.
or
We AI folk were talking so enthusiastically, even the businesspeople in the company were starting to get excited. This AI engine that had been absorbing so much time and money, now it was about to bear fruit and burst forth upon the world!
I guess what I encountered here messed up my judgement by leading me to go too far in suppressing the absurdity heuristic.
But given what I have encountered on Less Wrong in the past, including serious treatments and discussions of the subject, I thought you were actually meaning what you wrote there.
The absurd part was supposed to be that Ben actually came close to building an AGI in 2000. I thought it would be obvious that I was making fun of him for being grossly overconfident.
BTW, I think some people around here do take ideas too seriously, and reports of nightmares probably weren’t jokes. But then I probably take ideas more seriously than the average person, and I don’t know on what grounds I can say that they take ideas too seriously, whereas I take them just seriously enough.
some people around here do take ideas too seriously … I don’t know on what grounds I can say that
If you ever gain a better understanding of what the grounds are on which you’re saying it, I’d definitely be interested. It seems to me that insofar as there are negative mental health consequences for people who take ideas seriously, these would be mitigated (and amplified, but more mitigated than amplified) if such people talked to each other more, which is however made more difficult by the risk that some XiXiDu type will latch onto something they say and cause damage by responding with hysteria.
One could construct a general argument of the form, “As soon as you can give me an argument why I shouldn’t take ideas seriously, I can just include that argument in my list of ideas to take seriously”. It’s unlikely to be quite that simple for humans, but still worth stating.
I’m pretty sure the bit about the stock market crash was a joke.
To be fair I think Wei_Dai was being rather whimsical with respect to the anthropic tangent!