Why do people signal anything at all? To get something for themselves from others. Why would signaling be outside the scope of consequentialism?
Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don't think such scenarios are implausibly extreme; they could come up in transhuman contexts.) In that case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.