Let me see if I got it right:
Defining an optimizer as an unpredictable process that maximizes an objective function does not account for algorithms whose behaviour we can compute (and hence predict).
Satisfying the property (P), “gives the objective function a higher value than a baseline in which the system does not exist”, is not sufficient:
a bottle lid satisfies (P) with “quantity of water in the bottle” as the objective function, but it is just a rigid object that some optimizer put there. It is not the best counter-example, though, because it is not a Yudkowskian optimizer.
if a liver did not exist or did random things instead, its owner would not be alive and earning money, so it satisfies (P) with “money in bank account” as the objective function. However, the better way to account for its behaviour (cf. the Yudkowskian definition) is as a sub-process of an income maximizer created by evolution.
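To make (P) concrete, here is a minimal sketch (my own toy model, with made-up numbers) of the counterfactual comparison it asks for, applied to the lid example:

```python
def water_in_bottle(lid_present: bool) -> float:
    """Toy objective (hypothetical numbers): how much water stays in the bottle."""
    return 1.0 if lid_present else 0.2  # without a lid, most of it spills or evaporates

# Property (P): the system gives the objective a higher value than the
# baseline in which the system does not exist.
satisfies_P = water_in_bottle(lid_present=True) > water_in_bottle(lid_present=False)
print(satisfies_P)  # True, even though the lid performs no optimization at all
```

The check passes even though the lid contains no search or improvement step, which is exactly why (P) is too weak on its own.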
One property that could work: the algorithm has a step that provably increases the objective function (e.g. a gradient-ascent step).
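As a sketch of what that property looks like, here is a minimal gradient-ascent loop (toy objective and learning rate chosen by me, not from the original note): for a smooth concave objective and a small enough learning rate, each step is guaranteed not to decrease the objective, and strictly increases it whenever the gradient is non-zero.

```python
def objective(x: float) -> float:
    # Toy concave objective (hypothetical), maximized at x = 3
    return -(x - 3.0) ** 2

def gradient(x: float) -> float:
    return -2.0 * (x - 3.0)

def ascent_step(x: float, lr: float = 0.1) -> float:
    # The step with the provable guarantee: for lr small enough relative to the
    # curvature, objective(x + lr * gradient(x)) >= objective(x).
    return x + lr * gradient(x)

x = 0.0
for _ in range(50):
    x_next = ascent_step(x)
    assert objective(x_next) >= objective(x)  # the improvement property, checked at runtime
    x = x_next

print(x, objective(x))  # x converges towards 3, the objective towards its maximum of 0
```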
Properties I think are relevant:
intent: the lid did not “choose” to be there, humans did
doing something that the outer optimizer cannot do “as well” without running the same process as the inner optimizer: it would be very tiring for humans to use their hands as lids, and humans cannot play Go as well as AlphaZero without actually running the algorithm.