1) Examples of G* should be given a cost-benefit analysis. Yeah, scammers and parasites exist, but societies that use money still seem to be better off than societies that try to get rid of it.
2) It’s unclear to me why you list CEV as one of the solutions. We use money to allocate limited resources. If magic nano-AI appears and resources become unlimited, why keep score at all? If it doesn’t and resources stay limited, how does CEV help you distribute bread, and would you really like it to replace money? (I wouldn’t. No caring daddies for me, please.)
In the case of a FAI, G would be friendliness and G* the friendliness definition. Avoiding a Goodhart's Law effect on G* is pretty much the core of the friendliness problem in a nutshell. An example of such a Goodhart's Law effect would be the molecular smiley faces scenario.
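For illustration only (nothing below is from the discussion itself): a toy Python sketch of the kind of failure the molecular smiley faces scenario points at, where an optimizer aimed at a hand-written proxy G* (a count of smiling faces) picks an action that is catastrophic under the true goal G (actual human well-being). The actions and scores are made-up assumptions.

```python
# Toy illustration of Goodhart's Law: the true goal G ("human well-being")
# and a hand-written proxy G* ("number of smiling faces") roughly agree on
# ordinary actions, but a strong optimizer pushed to the extreme on G*
# selects an action that scores terribly on G.

actions = {
    # action: (true_wellbeing (G), smiley_count (G*)) -- invented numbers
    "tell a joke":                              (5,     5),
    "throw a party":                            (8,     8),
    "cure a disease":                           (10,    6),
    "tile space with molecular smiley faces":   (-1000, 10**9),
}

def true_goal(action):   # G: what we actually care about
    return actions[action][0]

def proxy(action):       # G*: the measurable stand-in we optimize
    return actions[action][1]

best_by_proxy = max(actions, key=proxy)
best_by_goal = max(actions, key=true_goal)

print("Optimizing G* picks:", best_by_proxy, "-> G =", true_goal(best_by_proxy))
print("Optimizing G  picks:", best_by_goal,  "-> G =", true_goal(best_by_goal))
```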
Ah, sorry. I’ve read the post as saying something different from what it actually says.
Good discussion.
The point I wanted to make was about extrapolated volition as a strategy for avoiding Goodhart's Law issues. If you extrapolate a person's volition towards the "person he/she wants to be" and take the resulting goal as G*, it will be about as close to G as it can be. I presented CEV as an example since the audience is more familiar with it.
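As a rough numerical sketch of that claim (my own toy model, not an actual extrapolation procedure): treat a person's currently expressed preferences as a biased copy of their true values G, and treat "extrapolation toward the person they want to be" as repeatedly stripping out that distortion; the extrapolated G* ends up far closer to G than the raw expressed preferences do.

```python
# Toy model: extrapolated volition as a way of getting G* close to G.
# The bias structure and the "reflection" step are illustrative assumptions.

true_values = [0.9, 0.1, 0.5, 0.7]       # G: what the person "really" wants
bias        = [0.3, -0.2, 0.25, -0.3]    # distortions from ignorance, akrasia, etc.

def distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Naive proxy: take current expressed preferences at face value.
expressed = [v + b for v, b in zip(true_values, bias)]

# "Extrapolated" proxy: each reflection step removes part of the distortion,
# moving stated preferences toward the person they want to be.
extrapolated = list(expressed)
for _ in range(10):
    extrapolated = [e - 0.5 * b for e, b in zip(extrapolated, bias)]
    bias = [0.5 * b for b in bias]

print("distance(G*, G), naive proxy:        ", round(distance(expressed, true_values), 3))
print("distance(G*, G), after extrapolation:", round(distance(extrapolated, true_values), 3))
```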
And FAWS, your definition of G and G* in the friendliness scenario is perfect. I’ve nothing more to add there.