I would say that, for most people’s everyday lives, it’s considerably more important than knowing whether tomatoes have genes.
What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society, if one wants to help people, one should spend more resources on educating them about global warming than about tomatoes having genes.
It is for their own good, but not their personal good. It’s like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself, if that’s an effective signal to them that you aren’t trying to fool them. By default they will model you as one of them and interpret your actions accordingly. Likewise, if you happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.
Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren’t there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about that cost in order to gauge whether the level of trust in a society is high enough, and furthermore, if you burden it in this way, whether the equilibrium is still stable in the medium term.
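To make the cost-analysis point concrete, here is a minimal toy sketch. It treats the size of each "withdrawal" as the payoff gap a rational benchmark agent would refuse to forgo, and tracks a running "trust bank" balance. All payoffs, the starting stock, and the replenishment rate are invented assumptions for illustration, not anything from the argument above.

```python
# Toy "trust bank" model. Every number here is a made-up assumption.

def rational_benchmark(payoff_cooperate: float, payoff_defect: float) -> bool:
    """A game-theory-savvy agent cooperates only when it pays to do so."""
    return payoff_cooperate >= payoff_defect

def trust_stock_after(rounds, start_stock=10.0, replenish=0.5):
    """Track the trust-bank balance while cooperation keeps being asked for,
    including in rounds where the rational benchmark says defection pays."""
    stock = start_stock
    history = []
    for payoff_cooperate, payoff_defect in rounds:
        if rational_benchmark(payoff_cooperate, payoff_defect):
            stock += replenish  # cooperation is self-sustaining here
        else:
            # Asking for cooperation the incentives don't support:
            # the withdrawal is the payoff gap the person forgoes.
            stock -= payoff_defect - payoff_cooperate
        history.append(stock)
    return history

# Medium-term check: mostly aligned rounds, with some costly asks mixed in.
rounds = [(3, 2)] * 8 + [(1, 4)] * 3 + [(3, 2)] * 8 + [(0, 5)] * 2
balance = trust_stock_after(rounds)
print("final trust stock:", round(balance[-1], 2),
      "| equilibrium stable:", min(balance) > 0)
```

The point of the sketch is only that the withdrawals are quantifiable against the rational benchmark, so you can at least check whether the equilibrium survives the burden you are placing on it.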
If it’s not, teach them about tomatoes.