Indeed, it could be argued that the first prediction isn't really accurate, because the stated prediction was that the disease would kill you, not that the AI would convince you to kill yourself.
This might sound nit-picky, but you started it :)
At no point does the example answer claim that the disease killed you. It only claims certainty that (a) you won't get rid of it, and (b) you will die. That would be technically accurate if the oracle planned to kill you with a meme, just as it would be accurate if it knew a piano would fall on you.
(You never asked about pianos, and it’s just a very carefully limited oracle so it doesn’t volunteer that kind of information.)
(I guess even if we got FAI right the first time, there’d still be a big chance we’d all die just because we weren’t paying enough attention to what it was saying...)