Some questions:
How will you make money in the future to pay back the loan?
Why aren’t you doing that now, even on a part-time basis?
Is there one academic physicist who will endorse your specific research agenda as worthwhile?
Likewise for an academic philosopher?
Likewise for anyone other than yourself?
Why won’t physicists doing ordinary physics (who are more numerous, higher in ability, and have a better track record of productivity) solve your problems in the course of making better predictive models?
How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic, or provide a basis for assessing your productivity (or lack thereof)?
Why not spend some time programming or tutoring math? If you worked at Google for a year, you could then live off the proceeds for several years in Bali or the like. Even a moderate amount of tutoring work could pay the rent.
Chalmers’ short comment in your link amounts to Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of endorsement of your specific research program.
To be frank, the Outside View says that most people who have achieved little over many years of work will achieve little in the next few months. Many of them have trouble with time horizons, lack of willpower, or other problems that systematically sabotage their efforts, or prefer to indulge other desires rather than work hard. These things would hinder both scientific research and paid work. Refusing to self-finance with a lucrative job, combined with the absence of any impressive work history (at least as far as you have made clear in your post), is a bad signal about your productivity, your reasons for asking us for money, and your ability to eventually pay it back.
No one else seems to buy your picture of what is most important (qualia + safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, connecting your course of research to your stated aim of affecting AI? Which came first: your desire to think about quantum consciousness theories, or an interest in safe AI? It seems like a huge stretch.
I’m sorry to be so blunt, but if you’re going to ask for money on Less Wrong, you should be able to answer such questions.