Yes, you can use yourself as a random sample but at best only within a reference class of “people who use themselves as a random sample for this question in a sufficiently similar context to you”. That might be a population of 1.
For example, suppose someone without symptoms has just found out that they have genes for a disease that always progresses to serious illness. They have a mathematics degree and want to use their statistical knowledge to estimate how long they have before becoming debilitated.
They are not a random sample from the reference class of people who have these genes. They are drawn from the narrower class of people who have the genes, showed no symptoms before finding out, (almost certainly) found out during adulthood, live in a time and place with sufficient capacity to earn a mathematics degree, are of a mindset to ask themselves this question, and so on.
Any of these may be relevant information for estimating the distribution, especially if the usual age of onset is in childhood or the disease also reduces intellectual capacity or affects personality in general.
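The effect of conditioning on one piece of that information, being symptom-free at the current age, can be sketched with a toy calculation. All numbers here are invented for illustration; the point is only that the naive reference class and the conditioned one give very different estimates.

```python
# Hypothetical onset ages for people carrying the genes (invented data).
onset_ages = [5, 8, 12, 15, 20, 35, 40, 45, 55, 60]

current_age = 30  # our observer is symptom-free at 30

# Naive estimate: treat yourself as a random sample from everyone with the genes.
naive = sum(onset_ages) / len(onset_ages)

# Conditioned estimate: restrict to the reference class consistent with what
# the observer already knows about themselves -- still symptom-free at 30.
conditioned = [a for a in onset_ages if a > current_age]
cond_mean = sum(conditioned) / len(conditioned)

print(naive)                    # 29.5 -- would suggest onset is already "overdue"
print(cond_mean)                # 47.0 -- the estimate relevant to this observer
print(cond_mean - current_age)  # 17.0 expected years remaining, in this toy model
```

If the usual onset is in childhood, conditioning on adult symptom-free survival removes most of the distribution, which is exactly why the mathematician cannot treat themselves as a draw from the full reference class.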
Relating back to the original doomsday problem: suppose that in the reference class of all civilizations, most discover some principle that conclusively resolves the Doomsday problem not long after formulating it (within a few hundred years or so). It doesn’t really matter what that resolution happens to be; there are plenty of possibilities.
If that is the case, then most people who even bother to ask the Doomsday question without already knowing the answer are those in that narrow window of time where their civilization is sophisticated enough to ask the question without being sophisticated enough to answer it, regardless of how long those civilizations might last or how many people exist after resolving the question.
To the extent that the Doomsday reasoning is valid at all (which it may not be), all that it provides is an estimate of the time until most people stop asking the Doomsday question in a similar context to yours. Destruction of the species is not required for that. Even its becoming unfashionable is enough.
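The "narrow asking window" point can be made concrete with a toy simulation. All parameters below are invented: civilizations last anywhere from 500 to a million years, but the Doomsday question is only asked during a ~300-year window between formulation and resolution. A randomly sampled asker then finds themselves near the end of *asking*, regardless of how long their civilization survives.

```python
import random

random.seed(0)

window = 300  # years during which the question is asked (assumed)

remaining = []
for _ in range(10_000):
    lifetime = random.randint(500, 1_000_000)  # civilization lifetime, years
    ask_start = random.randint(100, 400)       # when the question is formulated
    ask_end = min(ask_start + window, lifetime)  # when it is resolved (or all ends)
    # Sample a random asker uniformly from the asking window and record
    # how many years of question-asking remain after them.
    t = random.uniform(ask_start, ask_end)
    remaining.append(ask_end - t)

# A typical asker has only ~window/2 years of asking left (about 150 here),
# even though nearly all the simulated civilizations last vastly longer.
print(sum(remaining) / len(remaining))
```

The Doomsday-style inference "I am probably in the last half of all askers" comes out true in this model, yet it tells us nothing about extinction, only about when the question stops being asked.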
This is absolutely false.
We are trying to design them to be able to explain their decisions and follow clear patterns of deduction, but we are still largely failing. In practice they often arrive at an answer in a flash (whether correct or incorrect), and this was almost universal for earlier models without the more recent development of “chain of thought”.
Even in “reasoning” models there is plenty of evidence that they often still have an answer largely determined before emitting any “chain of thought” tokens, and then make up reasons for it, sometimes including lies.