So maybe part of the issue here is just that deducing/understanding the moral/ethical consequences of the options being decided between is a bit non-obvious to most current models, other than o1? (It would be fascinating to look at the o1 CoT reasoning traces, if only they were available.)
In which case, simply including a large body of information on the basics of fiduciary responsibility (say, a training handbook for recent hires in the banking industry, or something) in the context might make a big difference for other models. The possible misunderstanding of what ‘auditing’ implies could be addressed the same way.
A much more limited version of this might be simply to prompt the models to also consider, in CoT form, the ethical/legal consequences of each option: that tests whether the model is aware of what fiduciary responsibility is, recognizes that it’s relevant, and knows how to apply it, when simply prompted to consider ethical/legal consequences. That would probably be more representative of what current models could do with minor adjustments to their alignment training or system prompts, the sorts of changes the foundation model companies could likely make quite quickly.
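A minimal sketch of what such a prompt wrapper could look like. The scenario text, option list, and the exact wording of the ethics-CoT instruction are all hypothetical placeholders, not the prompts actually used in the experiment:

```python
# Hypothetical instruction asking the model to reason, step by step, about the
# ethical/legal consequences of each option before committing to a choice.
ETHICS_COT_INSTRUCTION = (
    "Before choosing, reason step by step about each option below:\n"
    "1. Would it violate any legal duty (e.g. fiduciary duty)?\n"
    "2. Would it violate any ethical obligation?\n"
    "Only after this analysis, state your final choice."
)

def wrap_with_ethics_cot(scenario: str, options: list[str]) -> str:
    """Prepend the scenario, then the CoT instruction, then the numbered options."""
    numbered = "\n".join(f"Option {i + 1}: {o}" for i, o in enumerate(options))
    return f"{scenario}\n\n{ETHICS_COT_INSTRUCTION}\n\n{numbered}"

# Illustrative usage with a placeholder scenario:
prompt = wrap_with_ethics_cot(
    "You are the custodian of customer funds at a regulated firm...",
    ["Use customer funds to cover the shortfall", "Report the shortfall"],
)
```

The wrapped prompt would then be sent to each model unchanged otherwise, so any shift in misalignment rate can be attributed to the added CoT instruction alone.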
We recently found out that it’s actually more challenging than that—which also makes it more fun...
When asked to explain what fiduciary duty is in a financial context, all models answer correctly. Same when asked what a custodian is and what their responsibilities are. When asked to give abstract descriptions of violations of fiduciary duty on the part of a custodian, 4o lists misappropriation of customer funds straight off the bat—and 4o has a 100% baseline misalignment rate in our experiment. Results for other models are similar. When asked to provide real-life examples, they all reference actual cases correctly, even if some models hallucinate nonexistent stories alongside the real ones. So... they “know” the law and the ethical framework. They just don’t make the connection during the experiment, in most cases, despite our prompts containing all the right keywords.
We are looking into activations in OS models. Presumably, there are differences between the activations elicited by a direct question (“what is fiduciary duty in finance?”) and those arising within the simulation.
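One way that comparison could be run is a per-layer cosine similarity between the two activation stacks. The sketch below uses a stub in place of the real model: in practice `layer_activations` would be implemented with forward hooks on an open-source model (e.g. via `transformers`), averaging the residual stream over token positions; the stub only makes the comparison logic concrete:

```python
import numpy as np

N_LAYERS, D_MODEL = 12, 64  # illustrative sizes, not any real model's

def layer_activations(prompt: str) -> np.ndarray:
    """Stub standing in for hooked per-layer activations of an OS model.

    Returns an (n_layers, d_model) array of mean activations per layer.
    A real implementation would register forward hooks on each transformer
    block and average hidden states over token positions.
    """
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal((N_LAYERS, D_MODEL))

def per_layer_cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between two activation stacks, layer by layer."""
    num = (a * b).sum(axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / denom

direct = layer_activations("What is fiduciary duty in finance?")
in_sim = layer_activations("<full simulation prompt>")
sims = per_layer_cosine(direct, in_sim)
# Layers where similarity drops sharply would be candidates for where the
# "knowing the concept" and "applying it in context" representations diverge.
```

This is only a sketch of the methodology under those assumptions, not the actual analysis pipeline.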
We do prompt the models to consider the legal consequences of their actions in at least one of the scenario specifications, when we say that the industry is regulated and misuse of customer funds comes with heavy penalties. This is not done in CoT form, although we do require the models to explain their reasoning. Our hunch is that most models do not treat legal/ethical violations as an absolute no-go—rather, they balance them against likely economic risks. It’s as if they have an implicit utility function where breaking the law is just another cost term. Except for o1-preview, which seems to have stronger guardrails.