It was definitely important to make animals come, or to make it rain, tens of thousands of years ago. I’m getting the feeling that when I tell you your rain-making method doesn’t work, you aren’t going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable at any time).
As for the best guess: if you suddenly need a best guess on a topic because someone told you something and you couldn’t see a major flaw in vague reasoning of the sort that can arrive at any conclusion via a minor flaw at every step, that’s a backdoor other agents will exploit to take your money (those agents will likely also opt to modify their own beliefs somewhat, because, hell, it feels a lot better to be saving mankind than to be scamming people). What is actually important to you is your utility, and the best reasoning here is strategic: do not leave backdoors open.
Not a relevant answer. You have given me no tools to estimate the risks or lack thereof in AI development. What methods do you use to reach conclusions on these issues? If they are good, I’d like to know them.
If you want to maximize your win, it is a relevant answer.
For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence in a non-cherry-picked manner, and that takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis that there is a risk is. (There is no method that would let you calculate the gravitational wave from the spin-down and collision of orbiting black holes without spending a lot of time studying GR, applied mathematics, and computer science. Why do you think there’s a method you could use to tackle an even harder problem from first principles?)
Better yet, stop thinking of it as a risk (we have introduced, for instrumental reasons, a burden of proof on those who say there is no risk when it comes to new drugs and the like, and we did so solely because the introduction of random chemicals into a well-evolved system is much more often harmful than beneficial; in general there is no reason to put the burden of proof on those who say there is no wolf, especially not when the people screaming wolf get candy for doing so), and think of it instead as a prediction of what happens in 100 years. Clearly, you would not listen to philosophers who use ideals for predictions.
Thank you for your answer. I don’t think the methods you describe are much good for predictions. On the other hand, few methods are much good for predictions anyway.
I’ve already picked up a few online AI courses to get some background; emotionally, this has made me feel that AI is likely to be somewhat less powerful than anticipated, but that its motivations are more certain to be alien than I’d thought. Not sure how much weight to put on these intuitions.