Responding to your #1, do you think we're on track to handle the cluster of AGI Ruin scenarios pointed at in 16-19? I feel we are not making any progress here, other than toward verifying some of the properties in 17.
16: outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.

17: on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over.

18: there's no reliable Cartesian-sensory ground truth (reliable loss-function-calculator) about whether an output is 'aligned'.

19: there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment.