Right. And at the moment, I’m not sure if that’s even ideal. Here’s something like my thinking:
In order to advance social justice (which I take as the most likely step towards maximizing global utility), we need to maximize both our compassion (i.e., our ability to desire globally eudaimonic consequences) and our rationality (i.e., our ability to predict and control consequences). This should be pretty straightforward to intuit; by this (admittedly simplistic) model,
Global Outcome Utility = Compassion × Rationality.
The thing is, once Rationality rises above Compassion, it makes sense to spend the next epsilon resource units on increasing Compassion, rather than increasing Rationality, until Compassion is higher than Rationality again.
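To make that marginal argument concrete: if utility really is the product of the two factors, the gain from raising one factor is proportional to the current level of the other, so the next unit of resource always does more good when spent on whichever factor is lagging. Here is a minimal Python sketch of that greedy rule, assuming a fixed improvement per epsilon spent; the function name, starting values, and step size are illustrative assumptions, not anything from the discussion itself.

```python
# Minimal sketch of the greedy allocation rule described above, assuming
# utility is literally Compassion * Rationality and that each unit of
# resource raises one factor by a fixed epsilon. All numbers are illustrative.

def allocate_greedily(compassion: float, rationality: float,
                      budget_units: int, epsilon: float = 0.1):
    """Spend each unit on whichever factor is currently smaller.

    With U = compassion * rationality, the marginal gain of raising
    compassion is proportional to rationality (and vice versa), so the
    next unit always does more good on the lagging factor.
    """
    for _ in range(budget_units):
        if compassion < rationality:
            compassion += epsilon
        else:
            rationality += epsilon
    return compassion, rationality


if __name__ == "__main__":
    c, r = allocate_greedily(compassion=1.0, rationality=3.0, budget_units=20)
    print(f"Compassion={c:.1f}, Rationality={r:.1f}, Utility={c * r:.2f}")
    # The greedy rule converges toward balance: for a fixed total C + R,
    # the product C * R is maximized when C == R.
```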
Also, sometimes it’s important to commit to a goal for the medium-term, to prevent thrashing. I’ve made a conscious effort, regarding social justice issues, to commit to a particular framework for six months, and only evaluate after that span has finished—otherwise I’m constantly course-correcting and feedback oscillations overwhelm the system.
That seems true—if you’ve got the right path to maximizing global utility. Making this call requires a certain baseline level of rationality, which we may or may not possess and which we’re very much prone to overestimating.
The consequences of not making the right call, or even of setting the bar too low, whether or not you happen to pick the right option yourself, are dire: either stalemate due to conflicting goals, or a doomed fight against a culturally more powerful faction, or (and possibly worse) progress in the wrong direction that we never quite recognize as counterproductive, lacking the tools to do so. In any case, eudaimonic improvement, if it comes, is only going to happen through a random walk.
Greedy strategies tend to be fragile.