The distinction between hardwiring and softwiring is, above the most physical, electronic level of computer design, a matter of policy: something in the programmer’s mind and habits, not something out in the world that the programmer is manipulating. From any particular version of the software’s perspective, all of the program it is running is equally hard (or equally soft).
It may not be impossible to handicap an entity in some way analogous to your suggestion, but holding fiercely to the concept of hardwiring will not help you find it. You would do better to think about mechanisms that would accomplish the handicapping in an environment where everything is equally hardwired.
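To make the point concrete, here is a toy sketch (all names invented for illustration, not drawn from any real system): a value written into the source and a value loaded from a config file end up as the same kind of object in the running program, and the “hard” one is just as rewritable by anything with access to the process.

```python
import json
import sys

HARDWIRED_LIMIT = 100                    # "hard": written in the source

config = json.loads('{"limit": 100}')    # "soft": loaded at runtime
SOFTWIRED_LIMIT = config["limit"]

# From the running program's perspective the two are indistinguishable:
assert type(HARDWIRED_LIMIT) is type(SOFTWIRED_LIMIT) is int

# And the "hard" one is just as mutable, given access to the module:
module = sys.modules[__name__]
setattr(module, "HARDWIRED_LIMIT", 9000)  # so much for hardwiring
print(HARDWIRED_LIMIT)                    # -> 9000
```

The “hardness” lives in the programmer’s policy of not touching certain values, not in any property the running software could detect or enforce.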
There’s some evidence that a chess AI’s ‘personality’ (an emergent quality of its play) is related to a parameter of its evaluation function called ‘contempt’, which is (handwaving wildly) something like how easy the engine assumes the opponent is to manipulate. In general, engines with higher contempt play for a win-or-loss more, and play for a draw less. What I’m trying to say is that your idea is not without merit, but it may have unanticipated consequences.
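A minimal sketch of how contempt typically enters an evaluation function (this is not any real engine’s API; `position`, its methods, and the centipawn values are hypothetical stand-ins): draws are scored as slightly losing for the side to move, so a higher contempt pushes the engine toward unbalanced, decisive play.

```python
def evaluate(position, contempt_cp=20):
    """Score `position` in centipawns from the side to move's view.

    `position` is a stand-in object with `.is_draw()` and
    `.material_balance()` methods, invented purely for illustration.
    """
    if position.is_draw():
        # With contempt = 0, a drawn position scores 0 (its true
        # game value). With contempt > 0, the engine treats a draw
        # as a small loss, so it prefers sharper continuations:
        # more wins *and* more losses, fewer draws.
        return -contempt_cp
    return position.material_balance()
```

Note the design choice: contempt doesn’t change what the engine knows about the position, only what it wants, which is exactly why tuning such a parameter shifts the engine’s apparent personality rather than its strength alone.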