Some existing work that does not rely on human modeling includes the formulation of safely interruptible agents, the formulation of impact measures (or side effects), approaches that build AI systems with clear formal specifications (e.g., some versions of tool AIs), some versions of oracle AIs, and boxing/containment.
I claim that all of these approaches appear not to rely on human modeling because they are only arguing for safety properties and not usefulness properties, and in order for them to be useful they will need to model humans. (The one exception might be tool AIs + formal specifications, but for the reasons in the parent comment I think that these will have an upper limit on usefulness.)