Similar considerations apply to possible singleton-ish AGIs that might be architecturally constrained to varying levels of efficiency in optimization; for example, some decision theories might coordinate poorly and so waste the cosmic commons. Thus optimizing for an AGI’s mere friendliness to existing humans could easily be setting much too low a bar, at least for perspectives and policies with a total utilitarian bent: something much closer to “ideal” would instead have to be the target.
This is why I don’t like “AI safety”. It’s implicitly setting the bar too low. “Friendliness” has the same problem in theory, but Eliezer actually seems to be aiming at “ideal” while working under that name. When Luke asked for suggestions on renaming their research program, I suggested “optimal AI”, but he didn’t seem to like it very much.
How about BAI for Best AI?
This is actually more or less what I am getting at when I talk about the distinction between FAI and Obedient AI.
Life got so much simpler when I went anti-total utilitarianism :-)
But actually your point stands, in a somewhat weaker form, for any system that likes resources and dislikes waste.
Mmh, life should be “hard” for proponents of any theory of population ethics. See Arrhenius (2000) and Blackorby, Bossert & Donaldson (2003).
Yes, but total utilitarianism has an unbounded utility function, with the present state vanishingly low on that scale, so it generates pressure to expand like few other theories do.
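As a toy formalization of that point (my sketch, not from the thread; the symbols $w_i$, $w^{*}$, $n_0$, $\bar{w}_0$ and $N$ are illustrative assumptions): total utilitarianism scores a population $P$ by the sum of individual welfares, which is unbounded as the population grows, whereas something like average utilitarianism is bounded by the best attainable individual welfare $w^{*}$:
\[
  U_{\mathrm{tot}}(P) \;=\; \sum_{i \in P} w_i
  \qquad\text{vs.}\qquad
  U_{\mathrm{avg}}(P) \;=\; \frac{1}{|P|} \sum_{i \in P} w_i \;\le\; w^{*}.
\]
If the current population $n_0$ at average welfare $\bar{w}_0$ is tiny compared to a cosmically feasible population $N$ at welfare near $w^{*}$, then
\[
  \frac{U_{\mathrm{tot}}(\text{now})}{U_{\mathrm{tot}}(\text{feasible})}
  \;\approx\; \frac{n_0\,\bar{w}_0}{N\,w^{*}} \;\ll\; 1,
\]
which is the sense in which the present state is vanishingly low on the total-utilitarian scale and expansion is strongly favoured.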