Loosemore’s claim could be steelmanned into the claim that the Maverick Nanny isn’t likely: it requires an AI with goals, with hardcoded goals, with hardcoded goals that include a full explicit definition of happiness, and with a buggy full explicit definition of happiness. That’s a chain of premises.
That isn’t even remotely what the paper said. It’s a parody.
Since it is a steelman, it isn’t supposed to be what the paper is saying.
Are you maintaining, in contrast, that the Maverick Nanny is flatly impossible?
Sorry, I may have been confused about what you were trying to say because you were responding to someone else, and I hadn’t come across the ‘steelman’ term before.
I withdraw ‘parody’ (sorry!), but … that isn’t quite what the logical structure of the paper was supposed to be.
It feels like you steelmanned it onto some other railroad track, so to speak.