Does anyone have a review of June Ku’s “MetaEthical.AI”? Nate and Jessica get acknowledgements—maybe you have a gloss? I’m having a little trouble figuring out what’s going on. From the hour or so I’ve given it, it seems to use functional isomorphism to declare what pre-found ‘brains’ in a pre-found model of the world are optimizing, and then to construct, somewhat vaguely, a utility function over external referents found by more functional isomorphism (the Ramsey-Lewis method).
Am I right that it doesn’t talk about how to get the models it uses? And that it applies functional isomorphism fairly directly, with few nods (I saw something about mean squared error in the pseudocode, but couldn’t really decipher it) to the possibility that humans might have models that aren’t functionally isomorphic to the real world, and that the most-isomorphic thing out there might not be what humans want to refer to?
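For concreteness, here’s a toy sketch of what I take “functional isomorphism” to mean in this context: two finite systems count as functionally isomorphic when some relabeling of states makes their dynamics agree. Everything here (the dict-as-transition-function encoding, the brute-force search) is my own illustration, not anything from the paper:

```python
from itertools import permutations

def functionally_isomorphic(f, g):
    """Check whether two finite dynamical systems are functionally
    isomorphic. Each system is a dict mapping state -> next state;
    we search for a bijection h with h(f(x)) == g(h(x)) for all x."""
    xs, ys = sorted(f), sorted(g)
    if len(xs) != len(ys):
        return False
    # Brute force over all bijections (fine for toy sizes only).
    for perm in permutations(ys):
        h = dict(zip(xs, perm))
        if all(h[f[x]] == g[h[x]] for x in xs):
            return True
    return False

# A 3-state cycle matches a relabeled 3-state cycle...
brain = {0: 1, 1: 2, 2: 0}
world = {'a': 'b', 'b': 'c', 'c': 'a'}
# ...but not a system with a different cycle structure.
stuck = {'a': 'a', 'b': 'c', 'c': 'b'}
```

My worry above is exactly that a search like this, run over a real world-model, might return some `world` fragment that happens to match best (or a least-squares relaxation of matching) without being the thing the human brain was actually referring to.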
(I suspect you weren’t asking me, but just in case you were, I don’t know the answers to these questions; they’re pretty far outside of my expertise.)