Now, suppose said grad student did come to the thesis adviser, but their motivation was that they'd been taught from a very young age that they should do math. Is that initiative?
Not sure. You could argue both points in this situation.
Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?
Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.
So, by what you've said, one evolved desire overriding another would still seem to be a bug?
I suppose it would.
Ah. In that case, there’s actually very minimal disagreement.