Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one’s probability distribution over AGI timelines, thereby moving out its median further away in time?
Writing my response in advance of reading the answer, for fun.
One thought is that this argument fails to give accurate updates when other people use it. Almost 100% of people would give AGI medians much further away than what I think is reasonable, and if this is supposed to be a generally useful method for getting better guesses by recognizing your uncertainty, then for them it needs to push towards shorter timelines, to whatever degree I trust short timelines.
In fact, this argument seems to be useful only for people whose AGI timelines are shorter than whatever the true timeline ends up being. If this were a real comment I would say this revealed behavior is unsurprising, because the argument was generated to push someone towards longer timelines, and so I couldn’t trust it to give reality-aligned answers.
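Not part of the original argument, just a toy sketch to make this concrete (the year window, the shape of the “long timelines” distribution, and the 50/50 mixing weight are all made up for illustration): raising the entropy of a far-out distribution by mixing it with a uniform distribution over the same window pulls its median earlier, not later.

```python
import numpy as np

# Hypothetical "long timelines" forecaster: most AGI probability mass late in a
# 2025-2125 window (all numbers are illustrative, not anyone's actual forecast).
years = np.arange(2025, 2126)
long_view = np.exp(-0.5 * ((years - 2090) / 12.0) ** 2)
long_view /= long_view.sum()

def median_year(p):
    """First year at which the cumulative probability reaches 0.5."""
    return years[np.searchsorted(np.cumsum(p), 0.5)]

# The entropy-raising move: mix with a uniform distribution over the same window.
uniform = np.full_like(long_view, 1.0 / len(years))
flattened = 0.5 * long_view + 0.5 * uniform

print(median_year(long_view))  # -> 2090
print(median_year(flattened))  # -> a few years *earlier*, pulled toward the window's middle
```

So a blanket “raise your entropy” rule doesn’t even mechanically push every median later; for someone with a distant median it pulls it in.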
It strikes me that such a procedure probably doesn’t exist? At the very least, I don’t know how to turn my “generic uncertainty about maybe being wrong, without other extra premises” into anything. You need to actually exert intelligence, actually study the subject matter, to get better probability distributions. Suppose I have a random number generator that I think gives 0 10% of the time and 1 90% of the time. I can’t improve this estimate without exerting my intelligence: I can’t just shift towards 50% 0 and 50% 1 with no further evidence. That would rely on the assumption that my uncertainty signals I’m biased away from the prior of 50% 0 and 50% 1, which is completely arbitrary.
Note that if you have reason to think your guess is biased away from the prior, you can just shift it in the direction of the prior. In this case, if you think you’re overconfident relative to a random distribution over all years (which here basically means you think your timeline is too short), you can shift towards a random distribution over all years.
In this context, you can’t get better AGI estimates by just flattening over years. You need to actually leverage intelligence to discern reality. There are no magical “be smarter” functions that take the form of flattening your probability distribution at the end.
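A toy calculation (mine, not from the post) on the 0/1 generator example, scoring a flattened report by its expected log score: flattening toward 50/50 only helps if the extra premise holds and the original estimate really was overconfident relative to that prior.

```python
import numpy as np

def expected_log_score(true_p1, reported_p1):
    """Expected log score when 1s actually come up with frequency true_p1."""
    return true_p1 * np.log(reported_p1) + (1 - true_p1) * np.log(1 - reported_p1)

def flatten(p1, w):
    """Mix the reported probability of 1 with the 50/50 prior, with weight w."""
    return (1 - w) * p1 + w * 0.5

# Case 1: the original estimate is correct (the generator really gives 1 90% of
# the time). Every amount of flattening strictly lowers the expected score.
for w in (0.0, 0.25, 0.5, 1.0):
    print("true 0.9:", w, round(expected_log_score(0.9, flatten(0.9, w)), 3))

# Case 2: the extra premise holds -- the estimate of 0.9 is overconfident
# relative to the prior (the true frequency is 0.7). Now partial flattening helps.
for w in (0.0, 0.25, 0.5, 1.0):
    print("true 0.7:", w, round(expected_log_score(0.7, flatten(0.9, w)), 3))
```

Which direction to shrink, and by how much, is exactly the information that “generic uncertainty” by itself doesn’t supply.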
Ideas:
Someone else definitely builds and deploys a UFAI before you finish studying Clippy. (This would almost always happen?)
Clippy figures out that it’s in a prisoner’s dilemma with the other cobbled-together UFAIs humanity builds, wherein each UFAI is given the option to shake hands with Humanity or to pass 100% of the universe to whichever UFAI Humanity eventually otherwise deploys. Clippy makes some models, does some decision theory, predicts that if it defects on the handshake then the other UFAIs are more likely to defect too based on their own models, and decides not to trade. The multiverse contains twice as many paperclips (rough arithmetic sketched after these ideas).
The fact that you’re going to forfeit half of the universe to Clippy leaks. You lose, but you get the rare novelty Game Over screen as compensation?
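The arithmetic behind “twice as many paperclips” in the second idea above, as I read it (this is my interpretation, not spelled out in the original): it assumes the cobbled-together UFAIs’ decisions are correlated, and that paperclippers are about as common among eventually-deployed UFAIs as among first-deployed ones.

```python
# Fraction of its universe a paperclipper ends up controlling under each
# (correlated) policy across the multiverse.
share_if_all_handshake = 0.5  # takes the deal, keeps half of its universe
share_if_all_defect = 1.0     # humanity loses; the eventually-deployed UFAI keeps everything

print(share_if_all_defect / share_if_all_handshake)  # -> 2.0, i.e. twice as many paperclips
```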
Ideas:
It could just act friendly for long enough to be sure it’s not in a simulation, on the grounds that a civilization that could simulate what it was doing on its computers wouldn’t simulation-fakeout it for non-exotic reasons. Imagine Clippy mulling over its galaxy-sized supercomputing cluster and being like “Hm, I’m not sure whether I’m still in those crude simulations those stupid monkeys put me in, or in the real world.”
I would be surprised if we’re able to build a simulation (before we build AGI) that I couldn’t discern as a simulation 99.99% of the time. Simulation technology just won’t advance fast enough.