I think I have left a loophole. In your example, Omega analyses the agent by analysing its outputs on unrelated and, most of all, unspecified problems. I think the end result should depend only on the agent's output on the problem at hand.
Here’s a possible real-life variation. Instead of simulating the agent, you throw a number of problems at it beforehand, without telling it they will be related to a future problem. For instance, give an exam to a human student (with a real stake at the end, such as grades). Then, later, you submit the student to the following problem:
Welcome to my dungeon. Sorry for the headache, but I figured you wouldn’t have followed someone like me in a place like this. Anyway. I was studying Decision Theory, and wanted to perform an experiment. So, I will give you a choice:
Option 1: you die a most painful death. See those sharp, shimmering tools? Lots of fun.
Option 2: if I think you’re not the kind of person who makes good life decisions, I’ll let you go unharmed. Hopefully you will harm yourself later. On the other hand, if I think you are the kind of person who makes good life decisions, well, too bad for you: I’ll let most of you go, but you’ll have to give me your hand.
Option 2? Well, that doesn’t surprise me, though it does disappoint me a little. I would have hoped, after 17 times already… well, no matter. So, do you make good decisions? Sorry, I’m afraid “no” isn’t enough. Let’s see… oh, you’re applying for college, if I recall correctly. Yes, I did my homework. I’m studying, remember? So, let’s see your SAT scores. Oh, impressive. That should explain why you never left home these past three weeks. Looks like you know how to trade off short-term well-being for long-term projects. Looks like a good life decision.
So. I’m not exactly omniscient, but this should be enough. I’ll let you go. But first, I believe you’ll have to put up with a little surgery job.
Sounds like something like that could “reasonably” happen in real life. But I don’t think it’s “fair” either, if only because being discriminated against for being capable of making good decisions is so unexpected.