I love the idea of modeling humans the way they want to be modeled. I think of this as a fuzzy pointer to human values that sharpens itself? But I’m confused about how to implement or formalize this process.
I hadn’t seen your sequence; I’m a couple of posts in and it’s great so far. Does it go into formalizing the process you describe?
Nope, sorry! I’m still at the stage of understanding where formalizing it would mean leaving in a bunch of parameters that hide hard problems (e.g. “a measure of how agent-shaped a model augmented with a rule for extracting preferences is” or “a function that compares plans of action in different ontologies”), so I didn’t really bother.
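To show what I mean, here’s a minimal Python sketch of the “self-sharpening pointer” as a fixed-point iteration, with every hard problem left as an explicitly unimplemented stub. All the names are hypothetical; this is the shape of a formalization, not an algorithm:

```python
# A sketch of the "self-sharpening pointer" as fixed-point iteration.
# Every stub below hides one of the hard problems mentioned above;
# nothing here is a real algorithm, just the shape of one.

from dataclasses import dataclass


@dataclass
class HumanModel:
    """Hypothetical stand-in for a model of a human (values, beliefs, ontology)."""
    params: dict


def extract_modeling_preferences(model: HumanModel) -> dict:
    """HARD PROBLEM: read off, from the current model, how the human
    wants to be modeled (ontology, level of idealization, etc.)."""
    raise NotImplementedError


def agent_shapedness(model: HumanModel) -> float:
    """HARD PROBLEM: "a measure of how agent-shaped a model augmented
    with a rule for extracting preferences is"."""
    raise NotImplementedError


def remodel(model: HumanModel, prefs: dict) -> HumanModel:
    """HARD PROBLEM: rebuild the model according to the extracted
    preferences about how it should be modeled."""
    raise NotImplementedError


def close_enough(a: HumanModel, b: HumanModel) -> bool:
    """HARD PROBLEM: compare models that may live in different
    ontologies ("a function that compares plans of action in
    different ontologies" is the plan-level analogue)."""
    raise NotImplementedError


def sharpen(model: HumanModel, max_iters: int = 100) -> HumanModel:
    """Iterate: ask the current model how it wants to be modeled,
    remodel accordingly, and stop at a fixed point (if one exists)."""
    for _ in range(max_iters):
        prefs = extract_modeling_preferences(model)
        new_model = remodel(model, prefs)
        if close_enough(model, new_model):
            return new_model
        model = new_model
    return model  # no fixed point found within budget
```

Writing it this way at least makes explicit that “sharpens itself” is a fixed-point claim, and that convergence of the loop is an extra assumption, not a given.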
But if you’re around Lightcone, hit me up and we can chat and write things on whiteboards.