> Finally, I’d note that having a “security mindset” seems like a terrible approach for raising human children to have good values.
Do you have kids, or any experience with them? (There are three small children in the house I live in.) I think you might want to look into childproofing, and meditate on its connection to security mindset.
Yes, childproofing isn’t necessarily related to the ‘values’ part; for that I would suggest something like Direct Instruction, which involves careful curriculum design to generate enough examples that students reliably infer the correct rule.
In short, I think the part of ‘raising children’ that involves the kids being intelligent and independently minded does benefit from security mindset.
As you mention in the next paragraph, this is a long-standing disagreement; I might as well point at the discussion of the relevance of raising human children to instilling goals in an AI in The Detached Lever Fallacy. The short summary is that humans have a wide range of options for their ‘values’, and are running some strategy for learning from their environment (including their parents and their style of child-rearing) which values to adopt. The situation with AI seems substantially different: why make an AI design that chooses whether to be good or bad based on whether you’re nice to it, when you could instead have it choose to always be good? [Note that this is distinct from “always be nice”; you could decide that your good AI can tell users that they’re being bad users!]