There’s probably something that I’m missing, so sorry if this solution has already been posted in the original thread. I don’t really have the “oomph” to read them all… Anyway, hasn’t this class of problems already been solved in chapter 5 of Jaynes’ book?
If the AI assigns some tiny probability to the data it has received having originated through some kind of deception, I think it’s only sensible that the hypothesis that the mugger is lying steals probability mass in the posterior distribution at a rate that grows at least linearly with the number of people he claims he can affect (though I would say exponentially).
The expected utility shouldn’t really be calculated on the posterior of the hypothesis “mugger possesses magical powers” alone, but on the posterior of “mugger can affect the Matrix” + “mugger is lying”.
ETA: This allows you to control the posterior probability of the hypothesis independently of the mugger’s claim, thereby shielding the AI from acting on an enormous disutility backed only by a slightly less enormous improbability.
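To make that concrete, here’s a rough toy sketch (my own illustration: BASE_PRIOR, the penalty shapes, and every number in it are invented for the example, not part of the problem). It just compares how the expected disutility of refusing the mugger behaves when the lying hypothesis absorbs posterior mass not at all, linearly, or exponentially in the size N of his claim.

```python
# Toy sketch of the argument above; all numbers and penalty shapes are
# invented purely for illustration.

BASE_PRIOR = 1e-9  # prior that the mugger is honest, before looking at the claim size

def honest_posterior(n, penalty):
    """Posterior that the mugger is honest, given he claims to affect n people."""
    if penalty == "none":          # claim size ignored: the naive case
        return BASE_PRIOR
    if penalty == "linear":        # lying hypothesis gains mass at least linearly in n
        return min(BASE_PRIOR, 1.0 / n)
    if penalty == "exponential":   # lying hypothesis gains mass exponentially in n
        return min(BASE_PRIOR, 2.0 ** -min(n, 10_000))  # cap the exponent to stay in float range
    raise ValueError(penalty)

for n in (10, 10**6, 3**(3**3), 10**100):
    for penalty in ("none", "linear", "exponential"):
        expected_disutility = honest_posterior(n, penalty) * n  # people affected if we refuse
        print(f"N = {n:8.1e}  penalty = {penalty:>11}  expected disutility = {expected_disutility:.3e}")
```

With no penalty the expected disutility grows without bound as N grows; with a linear penalty it stays bounded; with an exponential penalty it vanishes, so the mugger can’t buy arbitrarily large expected disutility just by naming a bigger number.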
Will someone who downvoted explain what’s wrong with this solution? Feedback is much appreciated, thanks :)
I did not downvote you, but I suspect the problem might not be the solution itself but the opening statement, most likely the bolded section.
I would guess that someone wanted you to have taken the time to read the previous thread.