There’s no room for human feedback between setting the values and implementing the optimal strategy.
Here and elsewhere I’ve advocated* that, rather than using Hanson’s idea of objectively verifiable target values such as GDP, futarchy would do better to add human feedback at the stage where it is decided whether the goals were met. Whoever proposed the goal would make that call after the prediction deadline expired, and could thus respond to any improper optimizing by refusing to declare the goal “met” even if it technically was.
[ * You can definitely do better than the ideas on that blog post, of course.]
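To make the proposal concrete, here is a minimal sketch in Python of what such proposer-adjudicated resolution might look like. All of the names here (`Goal`, `resolve`, `metric_hit`, and so on) are hypothetical illustrations, not part of any actual futarchy implementation; the point is only the control flow: resolution happens after the deadline, is reserved to the proposer, and the proposer may refuse to declare the goal met even when the measured target technically fired.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Goal:
    """A futarchy goal whose resolution is adjudicated by its proposer."""
    description: str
    proposer: str
    deadline: datetime
    metric_hit: bool = False   # did the measurable target technically trigger?
    resolved: bool = False
    met: bool = False

    def resolve(self, adjudicator: str, declare_met: bool, now: datetime) -> None:
        """Only the original proposer may resolve, and only after the deadline.

        Crucially, the proposer may declare the goal unmet even when the
        metric technically fired: this is the human-feedback check against
        improper optimizing of the target value.
        """
        if adjudicator != self.proposer:
            raise PermissionError("only the proposer adjudicates this goal")
        if now < self.deadline:
            raise ValueError("cannot resolve before the prediction deadline")
        self.resolved = True
        self.met = declare_met


# Usage: the metric was technically hit, but the proposer judges it was gamed
# and refuses to declare the goal met.
goal = Goal(
    description="Raise measured GDP by 3%",
    proposer="alice",
    deadline=datetime(2030, 1, 1, tzinfo=timezone.utc),
)
goal.metric_hit = True
goal.resolve("alice", declare_met=False,
             now=datetime(2030, 1, 2, tzinfo=timezone.utc))
```

The design point is the last call: `metric_hit` being true does not force `met` to be true. A human judgment sits between the measurement and the market’s payout, which is exactly the feedback step that Hanson’s objectively-verified targets leave out.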