The force of “scientific facts” is that they constrain the world.
In the context of this comment, the goal of FAI can be described as constraining the world by “moral facts”, just as the laws of physics constrain it by “physical facts”. This is the sense in which I mean “FAI = Physical Laws 2.0”.
Only in a useless way: there is a specific FAI that does the “truly right” thing, but the truth of that rightness doesn’t spare you from having to code the rightness in. Goodness is not discoverably true: if you don’t already know exactly what goodness is, you can’t find it out.
I’m describing what a post-FAI world would look like, not how to build one.
Hmmm, that is interesting. Well, define the collection W_i of worlds, each run by a superintelligence, with the subscript i ranging over possible goals. No matter what i is, those worlds are going to look, to any agents inside them, like worlds with “moral truths”.
However, any agent that learned the real physics of such a world would see that the goodness is written into the initial conditions, not into the laws.
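A minimal formalization of that last point (the symbols \Phi_L, c_i, M, and \mathcal{G} are notation I’m introducing for illustration, not anything from the thread): write each such world as evolution under a fixed physical law from goal-dependent initial conditions,

\[
W_i = \Phi_L(c_i), \qquad i \in \mathcal{G},
\]

where L is the physical law, \Phi_L is time-evolution under L, c_i is the initial condition arranged by a superintelligence pursuing goal i, and \mathcal{G} is the set of possible goals. The moral regularities an inside agent observes,

\[
M(W_i) = M(\Phi_L(c_i)),
\]

vary with c_i while L is identical across every i, so on this sketch the apparent “moral truths” are facts about the initial conditions, not about the laws.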