OK, there are many people writing explanations, but if all of them are rehashing the same points from the Superintelligence book, then there is not much value in that (and I'm tired of reading the same things over and over). Of course you don't need new arguments or new evidence, but it's still strange if there aren't any.
Anyone who has read this FAQ and others, but isn't a believer yet, will have some specific objections. But I don't think everyone's objections are unique; a better FAQ should be able to cover them, if their refutations exist to begin with.
Also, are you yourself working on AI risk? If not, why not? Is this not the most important problem of our time? Would EY not say that you should work on it? Could it be that you and he actually have wildly different estimates of P(AI doom), despite agreeing on the arguments?
As for Raemon, you’re right, I probably misunderstood why he’s unhappy with newer explanations.
are you yourself working on AI risk? If not, why not?
etc.
I presume you have no idea how enraging these questions are, because you know less than nothing about my life.
I will leave it to you to decide whether this "Average Redditor" style of behavior (look it up; it's a YouTube character) is something you should avoid in the future.
If you actually do want to work on AI risk, but something is preventing you, you can just say "personal reasons"; I'm not going to ask for details.
I understand that my style is annoying to some. Unfortunately, I have not observed polite and friendly people getting interesting answers, so I’ll have to remain like that.
Your questions opened multiple wounds, but I’ll get over it.
I “work on” AI risk, in the sense that I think about it when I can. Under better circumstances, I suspect I could make important contributions. I have not yet found a path to better circumstances.