My model of EY doesn't know what the real EY knows. However, there seems to be overwhelming evidence that non-AI alignment is a bottleneck, and that network learning similar to what occurs naturally is likely to be a relevant path to developing dangerously capable AI.
For my model of EY, "halt, melt, and catch fire" seems overdetermined. I notice that I am confused.