I don’t pretend I’ve read every refutation of Aeonios’s arguments that’s out there, but I’ve read a few. Generally, those “refutations” strike me as plausible arguments by smart people, but far from bulletproof. Thus, I think that your [DaFranker’s] attitude of “I know better so I barely have time for this” isn’t the best one.
(I’m sorry, I don’t have time to get into the details of the arguments themselves, so this post is all meta. I realize that that’s somewhat hypocritical, but “hypocrisy is the tribute vice pays to virtue” so I’m OK with that.)
Indeed, most of them are just plausible arguments by smart people, and they have not been formally proven. However, no argument for anything in AI research has been formally proven, except for some very basic mathematics and computer science. Basically, at the moment all we have to go on is a lot of thought, some circumstantial “evidence”, and our sets of beliefs.
All I’m saying is that, if you look at the trend, it’s much more likely (with my priors, at least) that the S.I. is “right” and that the arguments that keep being brought against it are unenlightened, in light of a few key observables: each argument against the S.I. having historically been “refuted” one after another, most of the S.I.’s critics not having spent nearly as much time thinking about the issues at hand or actually researching AI, etc.
It’s not that I know better, merely that the evidence presented to me from “both sides” (if one were to arbitrarily delimit two opposing factions, for simplicity) and my own knowledge of the world seem to indicate that the “S.I. side” holds propositions which are much more likely to be true. I’ll admit that the end result does project that attitude, but that is mostly incidental: I was pressed for time when I wrote that particular post, and I did believe it would be pointless to discuss and argue further for the benefit of an outsider who hadn’t yet read the relevant material on the topic at hand.
But in this case, “more likely to be true” means something like “a good enough argument to move my priors by roughly an order of magnitude, or two at the outside”. Since, in the face of our ignorance of the future, reasonable priors could differ by several orders of magnitude, even the best arguments I’ve seen aren’t enough to dismiss either “side” as silly or not worthy of further consideration (except stuff that was obviously silly to begin with).
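To put rough numbers on that (the figures below are purely illustrative, not anyone’s actual estimates): in odds form, Bayes’ rule says

$$ \underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{Bayes factor}} \;\times\; \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}} $$

so an argument worth a Bayes factor of $10^2$ moves prior odds of $10^{-1}$ up to $10$, but moves prior odds of $10^{-4}$ only to $10^{-2}$. Two readers whose priors differ by several orders of magnitude can hear the same strong argument and still land on opposite sides of even odds.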
That’s a very good point.
I was intuitively tempted to retort with a bunch of things about the likelihood of exceptions and the information taken into consideration, but I realized before posting that I was actually falling victim to several biases in that train of thought. You’ve given me a new way to think about the issue. I’m still of the intuition that any new way of thinking about it will only reinforce my beliefs and support the S.I. over time, though.
For now, I’m content to concede that I was leaning too heavily on my priors and on my confidence in my own knowledge of the universe (on which my posteriors for AI issues inevitably depend, one way or another), among possibly other mistakes. However, at first glance this seems like even more evidence of the need for a new mathematical or logical language to discuss these questions with greater depth, detail, and formality.