Wow. If that’s all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers, and explaining that the suggested equations only measure how many answers were right, not how that feat was accomplished, then I don’t even know how to properly respond...
Oh, and by the way, in the comments I suggest to Dr. Legg how to keep track of the machine doing some learning and figuring things out, so there’s another thing to consider. And yes, I’ve had the formal instruction in discrete math to do so.
Wow. If that’s all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers, and explaining that the suggested equations only measure how many answers were right, not how that feat was accomplished, then I don’t even know how to properly respond...
It is possible that I didn’t explain my point well. The problem I am referring to is your apparent insistence that there are things that machines can’t do that people can and that this is insurmountable. Most of your subclaims are completely reasonable, but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong. Even today, that’s not true by most definitions of those terms. Neural nets and genetic algorithms often don’t do what they are told.
… but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong.
Only if you choose to discard any thought of how machines are actually built. There’s no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they’re told.
Neural nets and genetic algorithms often don’t do what they are told.
Actually, they do precisely what they’re told, because without a fitness function which determines what problem they are to solve in their output and their level of correctness, they just crash the computer. Don’t mistake algorithms that have very generous bounds and allow us to try different possible solutions to the same problem for some sort of thinking or initiative on the computer’s part. And when computers do something weird, it’s because of a bug which sends them pursuing their logic in ways programmers never intended, not because they decide to go off on their own.
I can’t tell you how many seemingly bizarre and ridiculous problems I’ve eventually tracked down to a bad loop, or a bad index value, or a missing symbol in a string...
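For concreteness, here’s roughly the kind of setup I mean, as a minimal sketch (Python, with a made-up toy problem and hypothetical names). The fitness function is what defines both the problem to be solved and the level of correctness, and the search never leaves the bounds we set for it.

```python
import random

# Hypothetical toy problem: evolve a bit string that matches a target pattern.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(candidate):
    # The fitness function defines the problem and the "level of correctness":
    # here, the number of bits that agree with the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents; discard the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))            # single-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # occasional mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())  # tends to land on (or near) TARGET, but only because fitness() said so
```

The result can look surprising, yet every candidate the algorithm ever considers is generated and scored exactly the way the code above says it will be.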
Only if you choose to discard any thought of how machines are actually built. There’s no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they’re told.
There’s no magic going on inside the two pounds of fatty tissue inside my skull either. Magic is apparently not required for creativity or initiative (whatever those may be).
Actually, they do precisely what they’re told, because without a fitness function which determines what problem they are to solve in their output and their level of correctness, they just crash the computer. Don’t mistake algorithms that have very generous bounds and allow us to try different possible solutions to the same problem for some sort of thinking or initiative on the computer’s part.
I’m confused by what you mean by “thinking” and “initiative.” Let’s narrow the field slightly. Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?
And when computers do something weird, it’s because of a bug which sends them pursuing their logic in ways programmers never intended, not because they decide to go off on their own.
Calling something a bug doesn’t change the nature of what is happening. That’s just a label. Humans are likely as smart as they are due to runaway sexual selection for intelligence. And then humans got really smart and realized that they could have all the pleasure of sex while avoiding the hassle of reproduction. Is the use of birth-control an example of human initiative or a bug? Does it make a difference?
Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?
Yes, but with a caveat. I could teach an ANN how to solve a problem, but it would learn more or less by random trial and error with a squashing function, until each “neuron” has the right weight and activation function. So it will learn how to solve this generic problem, but not because it traced its way along all the steps.
(Actually, I made a mistake in my previous reply: ANNs have no fitness function; that’s a genetic algorithm. ANNs are given an input and a desired output.)
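Roughly, the trial-and-error adjustment I’m describing looks like this minimal sketch (Python, with a made-up toy task and hypothetical names; a real net would have more neurons, and proper backpropagation also uses the derivative of the squashing function in its updates):

```python
import math
import random

# Hypothetical toy task: (input, desired output) pairs for logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def squash(x):
    # The "squashing function": a sigmoid that maps any weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.5

for _ in range(5000):
    x, target = random.choice(data)
    output = squash(weights[0] * x[0] + weights[1] * x[1] + bias)
    error = target - output
    # Nudge the weights in whatever direction shrinks the error, and repeat
    # until the outputs match the desired outputs.
    weights[0] += rate * error * x[0]
    weights[1] += rate * error * x[1]
    bias += rate * error

print([round(squash(weights[0] * a + weights[1] * b + bias)) for (a, b), _ in data])
# typically [0, 1, 1, 1]: the net "solved" OR without tracing any logical steps
```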
So if you develop a new definition or conjecture and can state why and how you did it, then develop a proof, you’ve shown thought. Your attempt to suddenly create a new definition or theorem just because you wanted to and were curious, rather than because you were tasked to do it, would be initiative.
Calling something a bug doesn’t change the nature of what is happening. That’s just a label.
No, you see, a bug is when a computer does something it’s not supposed to do and handles its data incorrectly. Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don’t have children have put their evolutionary desire to provide for themselves above the drive to reproduce and counter that urge with protected sex. So it’s not so much a bug as a solution to some of the problems posed by reproduction. Now, celibacy is something I’d call a bug, and we know from many studies that it’s almost always a really bad idea to forgo sex altogether. Mental health tends to suffer greatly.
So if you develop a new definition or conjecture and can state why and how you did it, then develop a proof, you’ve shown thought. Your attempt to suddenly create a new definition or theorem just because you wanted to and were curious, rather than because you were tasked to do it, would be initiative.
Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative? Is a professional mathematician showing initiative? They keep thinking about math because that’s what gives them positive feedback (e.g. salary, tenure, positive remarks from their peers).
No, you see, a bug is when a computer does something it’s not supposed to do and handles its data incorrectly
Is “incorrectly” a normative or descriptive term? How is it different from “this program didn’t do what I expected it to do,” other than that you label it a bug when the program deviates more from what you wanted to accomplish? Keep in mind that what a human wants isn’t a notion that cleaves reality at the joints.
Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don’t have children have put their evolutionary desire to provide for themselves above the drive to reproduce and counter that urge with protected sex. So it’s not so much a bug as a solution to some of the problems posed by reproduction.
Ok. So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don’t want to ever have kids, is that a bug in your view?
Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?
Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn’t sound like it, so I’d say it’s not. Initiative is doing something that’s not required, but something you feel needs to be done or something you want to do.
Is “incorrectly” a normative or descriptive term?
Yes. When you need it to return “A” and it returns “Finland,” it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic after the bug manifests itself.
Keep in mind that what a human wants isn’t a notion that cleaves reality at the joints.
Ok, when you build a car but the car doesn’t start, I don’t think you’re going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bends to our whims. You’re probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn’t seem to be able to do so, there’s a bug in the system.
So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don’t want to ever have kids, is that a bug in your view?
That’s answered in the second sentence of the quote you chose...
Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn’t sound like it, so I’d say it’s not. Initiative is doing something that’s not required, but something you feel needs to be done or something you want to do.
Ok. Now, if said grad student did come to the thesis adviser, but their motivation was that they’ve been taught from a very young age that they should do math. Is there initiative?
Ok, when you build a car but the car doesn’t start, I don’t think you’re going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bends to our whims. You’re probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn’t seem to be able to do so, there’s a bug in the system.
It seems that a large part of the disagreement here is due to implicit premises. You seem to be focused on very narrow AI, when the entire issue is what happens when one doesn’t have narrow AI but rather AI that has most of the capabilities that humans have. Let’s set aside whether or not we should build such AIs and whether or not they are possible. Assuming that such entities are possible, do you or do you not think there’s a risk of the AI getting out of control?
So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don’t want to ever have kids, is that a bug in your view?
That’s answered in the second sentence of the quote you chose...
Either there’s a miscommunication here or there’s a misunderstanding about how evolution works. An organism that puts its own survival over reproducing is an evolutionary dead end. Historically, lots of humans didn’t want any children, but they didn’t have effective birth control methods, so in the ancestral environment there was minimal evolutionary incentive to remove that preference. It is only recently that there has been widespread and effective birth control. So what you’ve described, one evolved desire overriding another, would still seem to be a bug.
Now, if said grad student did come to the thesis adviser, but their motivation was that they’ve been taught from a very young age that they should do math. Is there initiative?
Not sure. You could argue both points in this situation.
Assuming that such entities are possible, do you or do you not think there’s a risk of the AI getting out of control?
Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.
So what you’ve described, one evolved desire overriding another, would still seem to be a bug.
Can you clarify how it’s helpful to know that my machine only does what it’s been told to do, if I can’t know what I’m telling it to do or be certain what I have told it to do?
I mean, there’s a sense in which humans only do “what they’ve been told to do”, also… we have programs embedded in DNA that manifest themselves in brains that construct minds from experience in constrained ways. (Unless you believe in some kind of magic free will in human minds, in which case this line of reasoning won’t seem sensible to you.) But so what? Knowing that doesn’t make humans harmless.
Additionally, a big part of what SIAI types emphasize is that knowing very precisely and very broadly (at the same time) what humans want is very important. Human desires are very complex, so this is not a simple task.
Can you clarify how it’s helpful to know that my machine only does what it’s been told to do, if I can’t know what I’m telling it to do or be certain what I have told it to do?
If you have no idea what you want your AI to do, why are you building it in the first place? I have never built an app that does, you know, anything and whatever. It’ll just be a muddled mess that probably won’t even compile.
we have programs embedded in DNA that manifest themselves in brains...
No we do not. This is not how biology works. Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All that DNA does is to regulate what proteins the cell will manufacture. Development goes well beyond that.
If you have no idea what you want your AI to do, why are you building it in the first place?
I’m not sure how you got from my question to your answer. I’m not talking at all about programmers not having intentions, and I agree with you that in pretty much all cases they do have intentions.
I’ll assume that I wasn’t clear, rather than that you’re willing to ignore what’s actually being said in favor of what lets you make a more compelling argument, and will attempt to be clearer.
You keep suggesting that there’s no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.
At the same time, you admit that computer programs sometimes do things their programmers didn’t intend for them to do. I might have written a stupid bug that causes the program to delete the contents of my hard drive, for example.
I agree completely that, in doing so, it is merely doing what I told it to do: I’m the one who wrote that stupid bug, it didn’t magically come out of nowhere, the program doesn’t have any mysterious kind of free will or anything. It’s just a program I wrote.
But I don’t see why that should be particularly reassuring. The fact remains that the contents of my hard drive are deleted, and I didn’t want them to be. That I’m the one who told the program to delete them makes no difference I care about; far more salient to me is that I didn’t intend for the program to delete them.
And the more a program is designed to flexibly construct strategies for achieving particular goals in the face of unpredictable environments, the harder it is to predict what it is that I’m actually telling my program to do, regardless of what I intend for it to do.
In other words: “I can’t know what I’m telling it to do or be certain what I have told it to do.”
Sure, once it deletes the files, I can (in principle) look back over the source code and say “Oh, I see why that happened.” But that doesn’t get me my files back.
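If it helps, here is a hypothetical sketch of the kind of “stupid bug” I mean (Python, with made-up names, and the dangerous call left commented out). Every line does exactly what it says; the result is still not what I intended.

```python
import os
import shutil

def clean_build_artifacts(project_root, build_subdir):
    """Intended behavior: delete only project_root/build_subdir."""
    target = os.path.join(project_root, build_subdir)
    shutil.rmtree(target)

# The bug: if build_subdir ever ends up as an empty string,
# os.path.join(project_root, "") is effectively just project_root, so the
# call below faithfully deletes the entire project. The program did exactly
# what it was told; it just wasn't told what I meant.
#
# clean_build_artifacts("/home/me/project", "")
```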
Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All that DNA does is to regulate what proteins the cell will manufacture. Development goes well beyond that.
And yet, remarkably, brains don’t “self-organize” in the absence of that regulation.
You’re right, of course, that the correct environment is also crucial; DNA won’t magically turn into a brain without a very specific environment in which to manifest.
Then again, source code won’t magically turn into a running program without a very specific environment either, and quite a lot of the information defining that running program comes from the compiler and the hardware platform rather than the source code… and yet we have no significant difficulty equating a running program with its source code.
(Sure, sometimes bugs turn out to be in the compiler or the hardware, but even halfway competent programmers don’t look there except as a matter of last resort. If the running program is doing something I didn’t intend, it’s most likely that the source code includes an instruction I didn’t intend to give.)
You keep suggesting that there’s no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.
No, I just keep saying that we don’t need to program them to “like rewards and fear punishments” and train them like we’d train dogs.
I agree completely that, in doing so, it is merely doing what I told it to do: I’m the one who wrote that stupid bug, it didn’t magically come out of nowhere, the program doesn’t have any mysterious kind of free will or anything. It’s just a program I wrote. But I don’t see why that should be particularly reassuring.
Oh no, it’s not. I have several posts on my blog detailing how bugs like that could actually turn a whole machine army against us and turn Terminator into a reality rather than a cheesy robots-take-over-the-world-for-shits-and-giggles flick.
… and yet we have no significant difficulty equating a running program with its source code.
But the source code isn’t like DNA in an organism. Source code covers so much more ground than that. Imagine having an absolute blueprint of how every cell cluster in your body will react to any stimuli through your entire life and every process it will undertake from now until your death, including how it will age. That would be source code. Your DNA is nowhere near that complete. It’s more like a list of suggestions and blueprints for raw materials.
No, I just keep saying that we don’t need to program them to “like rewards and fear punishments” and train them like we’d train dogs.
(shrug) OK, fair enough.
I agree with you that reward/punishment conditioning of software is a goofy idea.
I was reading your comment here to indicate that we can constrain the behavior of human-level AGIs by just putting appropriate constraints in the code. (“You don’t want the machine to do something? Put in a boundry. [..] with a machine, you can just tell it not to do that.”)
I think that idea is importantly wrong, which is why I was responding to it, but if you don’t actually believe that then we apparently don’t have a disagreement.
Re: source code… if we’re talking about code that is capable of itself generating executable code as output in response to situations that arise (which seems implicit in the idea of a human-level AGI, given that humans are capable of generating executable code), it isn’t at all clear to me that its original source code comprises in any kind of useful way an absolute blueprint for how every part of it will react to any stimuli.
Again, sure, I’m not positing magic: whatever it does, it does because of the interaction between its source code and the environment in which it runs, there’s no kind of magic third factor. So, sure, given the source code and an accurate specification of its environment (including its entire relevant history), I can in principle determine precisely what it will do. Absolutely agreed. (Of course, in practice that might be so complicated that I can’t actually do it, but you aren’t claiming otherwise.)
If you don’t think the same is true of humans, then we disagree about humans, but I think that’s incidental.
… if we’re talking about code that is capable of itself generating executable code as output in response to situations that arise
Again, it really shouldn’t be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn’t require new code; it just requires a routine to initialize a new set of ANN objects at runtime.
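Something like this minimal sketch is all I have in mind (Python, with hypothetical names): the program picks up a new skill by creating and training another network object at runtime, without generating a single line of new executable code.

```python
import random

class TinyNet:
    """A stand-in for an ANN: a bag of weights plus a training routine."""
    def __init__(self, n_inputs, n_outputs):
        self.weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]

    def train(self, examples):
        # Weight adjustment would go here, as in the earlier sketch.
        pass

class Agent:
    def __init__(self):
        self.skills = {}                        # skill name -> trained network

    def learn_new_skill(self, name, n_inputs, n_outputs, examples):
        net = TinyNet(n_inputs, n_outputs)      # a new ANN object, created at runtime
        net.train(examples)
        self.skills[name] = net                 # new capability, zero new source code

agent = Agent()
agent.learn_new_skill("tell_apples_from_oranges", n_inputs=4, n_outputs=1, examples=[])
```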
If it somehow follows from that that there’s an absolute blueprint in it for how every part of it will react to any stimuli in a way that is categorically different from how human genetics specify how humans will respond to any environment, then I don’t follow the connection… sorry. I have only an interested layman’s understanding of ANNs.
Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.
I suppose it would.
Ah. In that case, there’s actually very minimal disagreement.