It means that when you look at an AI system, you can tell whether it’s FAI or not.
Look at it how? Look at its source code? I argued that we can write source code that will result in FAI, and you could recognize that. Look at the weights of its “brain”? Probably not, any more than we can look at human brains and recognize what they do. Look at its actions? Definitely: FAI is an AI that doesn’t destroy the world, etc.
I don’t see what voting systems have to do with CEV. The “E” part means you don’t trust what the real, current humans say, so making them vote on anything is pointless.
The voting doesn’t have to actually happen. The AI can predict what we would vote for, if we had plenty of time to debate it. And you can get even more abstract than that and have the FAI just figure out the details of E itself.
The point is to solve the “coherent” part: to show that you can find a coherent set of values from a bunch of different agents or messy human brains. And to show that mathematicians have actually extensively studied a special case of this problem, namely voting systems.
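To make the voting-systems connection concrete, here is a minimal sketch of aggregating conflicting ranked preferences into a single ordering, using Borda count as one illustrative rule from social choice theory. The agents and value labels are invented for illustration:

```python
# A toy sketch of preference aggregation via Borda count: each option
# earns (n - 1 - rank) points per ballot, where n is the number of
# options ranked on that ballot.

from collections import defaultdict

def borda_count(ballots):
    """Aggregate ranked ballots into one ordering, best first."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for rank, option in enumerate(ranking):
            scores[option] += n - 1 - rank
    # Sort options from highest to lowest total score.
    return sorted(scores, key=scores.get, reverse=True)

# Three agents with partly conflicting value rankings (made up).
ballots = [
    ["freedom", "safety", "equality"],
    ["safety", "freedom", "equality"],
    ["equality", "safety", "freedom"],
]
print(borda_count(ballots))  # → ['safety', 'freedom', 'equality']
```

“Safety” wins here despite being only one agent’s first choice, because it is nobody’s last choice; that kind of compromise is exactly what aggregation rules are designed to surface.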
That’s a meaningless expression without context. Notably, we don’t have the same genes or the same brain structures. I don’t know about you, but it is really obvious to me that humans are not identical.
Compared to other animals, compared to aliens, yes we are incredibly similar. We do have 99.99% identical DNA, our brains all have the same structure with minor variations.
How do you know what’s false?
Did I claim that I did?
How do you know what’s fair? Is it an objective thing, something that exists in the territory?
I gave a precise algorithm for doing that actually.
Right, so the fat man gets thrown under the train… X-)
Which is the best possible outcome, versus letting 5 other people die. But I don’t think these kinds of scenarios are realistic once we have incredibly powerful AI.
LOL. You’re just handwaving then. “And here, in the difficult part, insert magic and everything works great!”
I’m not handwaving anything… There is no magic involved at all. The whole scenario of persuading people is counterfactual and doesn’t need to actually be carried out. The point is to define more exactly what CEV is: the values you would want if you had the correct beliefs. You don’t need to actually have the correct beliefs to give your CEV.
I think we have, um, irreconcilable differences and are just spinning wheels here. I’m happy to agree to disagree.