What are the prerequisites for grasping the truth when it comes to AI risks?
Ability to program is probably not sufficient, but it is definitely necessary. But not because of domain relevance; it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
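To make the feedback loop concrete, here is a minimal sketch (my own illustration in Python, not anything from the thread; the function and values are hypothetical): merge two concepts that needed to stay distinct, and the interpreter throws the mistake back on the first run.

```python
# "A file" and "the name of a file" feel like one concept until the
# interpreter refuses to let you conflate them.

def count_lines(f):
    """Expects an open file object, not a path string."""
    return sum(1 for _ in f)

try:
    count_lines(42)  # an int is neither a file nor a path
except TypeError as err:
    print(err)  # 'int' object is not iterable -- thrown in your face at once

# Subtler merge: count_lines("notes.txt") runs, but iterates the *string*
# and counts characters, so the first wrong answer exposes the confusion.
```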
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
That looks highly doubtful to me.

You mean that most cognitive skills can be taught in multiple ways, and you don’t see why those taught by programming are any different? Or do you have a specific skill taught by programming in mind, and think there are other ways to learn it?
There are a whole bunch of considerations.

First, meta. It should be suspicious to see programmers claiming to possess special cognitive skills that only they can have—it’s basically a “high priesthood” claim. Besides, programming became widespread only about 30 years ago. So, which cognitive skills were very rare until then?
Second, “presenting a tight feedback loop where … the mistake quickly gets thrown in your face” isn’t a unique-to-programming situation by any means.
Third, most cognitive skills are fairly diffuse and cross-linked. Which specific cognitive skills cannot be acquired other than through programming?
I suspect that what the OP meant was “My programmer friends are generally smarter than my non-programmer friends” which is, um, a different claim :-/
I don’t think programming is the only way to build… let’s call it “reductionist humility”. Nor even necessarily the most reliable; non-software engineers probably have intuitions at least as good, for example, to say nothing of people like research-level physicists. I do think it’s the fastest, cheapest, and currently most common, thanks to tight feedback loops and a low barrier to entry.
On the other hand, most programmers—and other types of engineers—compartmentalize this sort of humility. There might even be something about the field that encourages compartmentalization, or attracts people who are already good at it; engineers are disproportionately likely to be religious fundamentalists, for example. Since that’s not sufficient to meet the demands of AGI problems, we probably shouldn’t be patting ourselves on the back too much here.
Can you expand on how you understand “reductionist humility”, in particular as a cognitive skill?

I might summarize it as an intuitive understanding that there is no magic, no anthropomorphism, in what you’re building; that any problems are entirely due to flaws in your specification or your model. I’m describing it in terms of humility because the hard part, in practice, seems to be internalizing the idea that you and not some external malicious agency are responsible for failures.
This is hard to cultivate directly, and programmers usually get partway there by adopting a semi-mechanistic conception of agency that can apply to the things they’re working on: the component knows about this, talks to that, has such-and-such a purpose in life. But I don’t see it much at all outside of scientists and engineers.
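For concreteness, a minimal sketch of that semi-mechanistic framing (my own illustration; the Sensor and Thermostat names are hypothetical): each component knows about something, talks to something, and has a purpose in life, and any misbehavior traces back to the specification rather than to malice.

```python
class Sensor:
    """Knows about raw readings; its purpose in life is to report them."""
    def read(self) -> float:
        return 21.5  # stub reading, degrees Celsius


class Thermostat:
    """Talks to a Sensor; its purpose is to decide whether to heat."""
    def __init__(self, sensor: Sensor, target_c: float):
        self.sensor = sensor
        self.target_c = target_c

    def should_heat(self) -> bool:
        # If this ever "maliciously" heats a warm room, the flaw is in
        # the comparison we specified, not in the thermostat.
        return self.sensor.read() < self.target_c


print(Thermostat(Sensor(), target_c=20.0).should_heat())  # False
```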
IOW, realizing that the reason you get fat when you eat a lot is not that you pissed off God and he is taking revenge, as certain people appear to alieve.

internalizing the idea that you and not some external malicious agency are responsible for failures.
So it’s basically responsibility?
...that any problems are entirely due to flaws in your specification or your model.
Clearly you never had to chase bugs through third-party libraries… :-) But yes, I understand what you mean, though I am not sure in which way this is a cognitive skill. I’d probably call it an attitude common to professions in which randomness or external factors don’t play a major role—sure, programming and engineering are prominent here.
You could describe it as a particular type of responsibility, but that feels noncentral to me.
Clearly you never had to chase bugs through third-party libraries...
Heh. A lot of my current job has to do with hacking OpenSSL, actually, which is by no means a bug-free library. But that’s part of what I was trying to get at by including the bit about models—and in disciplines like physics, of course, there’s nothing but third-party content.
I don’t see attitudes and cognitive skills as being all that well differentiated.
But randomness and external factors do predominate in almost everything. For that reason, applying programming skills to other domains is almost certain to be suboptimal.
But randomness and external factors do predominate in almost everything.
I don’t think so, otherwise walking out of your door each morning would start a wild adventure and attempting to drive a vehicle would be an act of utter madness.
They don’t predominate overall because you have learnt how to deal with them. If there were no random or external factors in driving, you could do so with a blindfold on.

...

Make up your mind :-)

Predominate in almost every problem. Don’t predominate in any solved problem. Learning to drive is learning to deal with other traffic (external) and not knowing what is going to happen next (random).
Much of the writing on this site is philosophy, and people with a technology background tend not to grok philosophy because they are accustomed to answers that can be looked up, or figured out by known methods. If they could keep the logic chops and lose the impatience, they might make good philosophers, but they tend not to.

Beg pardon?
it’s necessary because programming teaches cognitive skills that you can’t get any other way, by presenting a tight feedback loop where every time you get confused, or merge concepts that needed to be distinct, or try to wield a concept without fully sharpening your understanding of it first, the mistake quickly gets thrown in your face.
On a complete sidenote, this is a lot of why programming is fun. I’ve also found that learning the Coq theorem-prover has exactly the same effect, to the point that studying Coq has become one of the things I do to relax.
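For a taste of that feedback loop (a sketch of mine in Lean, a close relative of Coq, since no actual code appears in the thread): a sloppy proof attempt is rejected the instant you make it, while naming the right lemma goes through.

```lean
-- `rfl` is rejected here, because `a + b` and `b + a` are not
-- definitionally equal; the prover throws the confusion back at once:
--   example (a b : Nat) : a + b = b + a := rfl   -- error
-- Naming the commutativity lemma is the honest proof:
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```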
And, well… it’s pretty clear from your writing that you haven’t mastered this yet, and that you aren’t going to become less confused without stepping sideways and mastering the basics first.
People have been telling him this for years. I doubt it will get much better.