Why not?
In what way is it like that, and how is that relevant to the question?
It’s like that precisely because it is easily predictable. As I said in another reply, an AI will experience its decisions as indeterminate, so anything it knows in advance in such a determinate way will not be understood as a decision, just as I don’t decide to die if my brain is crushed, even though I know that will happen. In the same way, the AI will merely know that it will self-destruct if it is placed underwater.
From this, it seems like your argument for why this will not appear in its decision algorithm is simply that you have a specific definition of “decision” that requires the AI to “understand it as a decision”. I don’t see why the AI has to experience its decisions as indeterminate (indeed, that seems like a flawed design if its decisions are actually determined!).
Rather, any code that leads from inputs to a decision should be called part of the AI’s ‘decision algorithm’, regardless of how it ‘feels’. I don’t have a problem with an AI ‘merely knowing’ that it will make a certain decision. (And be careful: ‘merely’ is an imprecise weasel word.)
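To make that concrete, here’s a toy sketch (all the names, like `is_underwater` and `expected_utility`, are made up for illustration): a hard-wired rule and a deliberative step can sit in the same inputs-to-action function, and on my definition both count as part of the ‘decision algorithm’:

```python
# Toy sketch: a hard-wired rule and a deliberative step in one
# inputs-to-action path. All names are made up for illustration.

def expected_utility(action, sensor_input):
    # Placeholder utility model, just for the sketch.
    scores = {"move_forward": 1.0, "turn_left": 0.5, "wait": 0.1}
    return scores[action]

def decide(sensor_input):
    # Hard-wired rule: it runs directly from the inputs to an action,
    # so on the "inputs -> decision" view it is part of the decision
    # algorithm, however the system would "feel" about it.
    if sensor_input["is_underwater"]:
        return "SELF_DESTRUCT"
    # Deliberative step: score the remaining candidate actions.
    candidates = ["move_forward", "turn_left", "wait"]
    return max(candidates, key=lambda a: expected_utility(a, sensor_input))

print(decide({"is_underwater": True}))   # -> SELF_DESTRUCT
print(decide({"is_underwater": False}))  # -> move_forward
```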
It isn’t a flawed design, because when you start running the program it has to analyze the results of different possible actions. Yes, the outcome is objectively determined, but the program has to consider several options as possible actions nonetheless.
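A minimal sketch of what I mean, with a made-up world model (`predict_outcome` and the action names are hypothetical): the function below is fully deterministic, yet producing its output requires evaluating each possible action in turn, which is the sense in which it ‘considers options’:

```python
# Deterministic choice that still "considers" options: the output is
# fixed by the code and its inputs, but computing it requires
# evaluating every candidate action. All names are illustrative.

def predict_outcome(action, state):
    # Hypothetical world model: maps an action in a state to a score.
    model = {"charge_battery": 2.0, "explore": 1.5, "idle": 0.0}
    return model[action]

def choose_action(state, possible_actions):
    best_action, best_score = None, float("-inf")
    for action in possible_actions:        # each option is evaluated...
        score = predict_outcome(action, state)
        if score > best_score:             # ...before one is selected
            best_action, best_score = action, score
    return best_action                     # determined, yet deliberated

print(choose_action({}, ["charge_battery", "explore", "idle"]))  # -> charge_battery
```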