Not a relevant answer. You have given me no tools to estimate the risks or lack thereof in AI development. What methods do you use to reach conclusions on these issues? If they are good, I’d like to know them.
If you want to maximize your winnings, it is a relevant answer.
For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence, in a non-cherry-picked manner, and takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis that there is such a risk is (see the toy sketch after this comment). (There is no method that would let you calculate the gravitational wave from the spin-down and collision of orbiting black holes without spending a lot of time studying GR, applied mathematics, and computer science. Why do you think there is a method you can use to tackle an even harder problem from first principles?)
Better yet, ban thinking of it as a risk, and think of it instead as a prediction of what happens in 100 years. (We have introduced, for instrumental reasons, a burden of proof on those who say there is no risk when it comes to new drugs and the like, and we did so solely because introducing random chemicals into a well-evolved system is much more often harmful than beneficial. In general there is no reason to put the burden of proof on those who say there is no wolf, especially not when the people crying wolf get candy for doing so.) Clearly, you would not listen to philosophers who use ideals for predictions.
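A minimal sketch of the "privileged hypothesis" estimate suggested above, assuming a uniform prior over N rival hypotheses of comparable complexity. The hypothesis count of 1000 and the likelihood ratios are illustrative assumptions, not figures from this exchange:

```python
# Toy Bayesian sketch of "how privileged is the hypothesis": if the risk
# hypothesis is just one of n comparably complex hypotheses about what
# happens in 100 years, its prior is small, and it takes strong evidence
# (a large likelihood ratio) to raise it to even odds.
# All numbers below are illustrative assumptions.

def posterior_probability(n_hypotheses: int, likelihood_ratio: float) -> float:
    """Posterior probability of one hypothesis out of n_hypotheses,
    starting from a uniform prior, after evidence with the given
    likelihood ratio in its favor."""
    prior_odds = 1.0 / (n_hypotheses - 1)   # uniform prior: 1/n each
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# With 1000 rival hypotheses, evidence near 1000:1 is needed for even odds.
for lr in (1.0, 10.0, 100.0, 1000.0):
    p = posterior_probability(1000, lr)
    print(f"likelihood ratio {lr:6.0f}:1 -> posterior probability {p:.4f}")
```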
Thank you for your answer. I don’t think the methods you describe are much good for predictions. On the other hand, few methods are much good for predictions anyway.
I’ve already picked up a few online AI courses to get some background; emotionally, this has made me feel that AI is likely to be somewhat less powerful than anticipated, but that its motivations are more certain to be alien than I’d thought. Not sure how much weight to put on these intuitions.