Thoughts in relation to AI learning...
Parents tend to be a bit obsessed with setting limits. Setting limits is sometimes necessary, but parents tend to rely too much on reacting to limit crossings. If you trained a robot by only reacting to limit crossings, then the robot might well spend all its time bouncing off the limit.
Think of a limit as a border on a region of acceptable behavior. The Kazdin method relies on incrementally (in small behavior-shaping steps) drawing the child toward the optimal point in that region of acceptable behavior. If you train a robot this way, then the robot will tend to stay close to the optimal location, well away from the limits.
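A minimal sketch of the contrast, not the Kazdin method itself: a toy hill-climbing learner in a hypothetical 1-D behavior space, trained once with a reward that only reacts to limit crossings and once with a shaped reward that pulls it toward an optimum. The names (LIMIT, OPTIMUM, train) and the setup are made up for illustration.

```python
import random

# Toy 1-D "behavior" space: positions in [-LIMIT, LIMIT] are acceptable,
# and the optimal behavior sits at OPTIMUM. (Both are assumptions.)
LIMIT = 10.0
OPTIMUM = 0.0

def limit_only_reward(position):
    """Reward that only reacts to limit crossings: inside the region the
    learner gets no signal at all, so nothing draws it toward the optimum."""
    return -1.0 if abs(position) > LIMIT else 0.0

def shaped_reward(position):
    """Shaped reward: every position gives a gradient toward the optimum,
    so the learner settles well inside the acceptable region."""
    return -abs(position - OPTIMUM)

def train(reward_fn, steps=5000, step_size=0.5, seed=0):
    """Simple hill climber: try a random perturbation and keep it
    whenever the reward does not get worse."""
    rng = random.Random(seed)
    position = rng.uniform(-LIMIT, LIMIT)
    for _ in range(steps):
        candidate = position + rng.uniform(-step_size, step_size)
        if reward_fn(candidate) >= reward_fn(position):
            position = candidate
    return position

if __name__ == "__main__":
    print("limit-only training ends near:", round(train(limit_only_reward), 2))
    print("shaped training ends near:", round(train(shaped_reward), 2))
```

Under the limit-only reward the learner just drifts to an arbitrary point in the acceptable region, sometimes right up against the border; under the shaped reward it ends up near the optimum.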
This reminds me of my significantly downvoted post about AI needing a caregiver.
I think there is something to be learned from "training natural intelligences," who also try to break out of the box, so to speak. But most people here either don't see the connection or consider it obviously wrong.
The link on “AI needing a caregiver” links to your profile and I can’t find the post about AI needing a caregiver.
Link corrected to http://lesswrong.com/lw/ihx/rationality_quotes_september_2013/9r1f