It seems at best fairly confused to say that an L-zombie is wrong because of something it would do if it were run, when we have evaluated what it would say or do against the situation in which it isn't run. Where you keep saying "is", "concludes", and "being", you should be saying "would", "would conclude", and "would be", each of which is a gloss for "would X if it were run"; and in the (counterfactual) world where the L-zombie "would" do those things, it "would be running" and therefore "would be right". Being careful with your tenses here will go a long way.
Nonetheless I think the concept of an L-zombie is useful, if only to point out that computation matters. I can write a simple program that encapsulates all possible L-zombies (or rather, would express them all if it were run), yet we wouldn't consider that program to be those consciousnesses: a point well worth remembering in discussions of the topic.
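A minimal sketch of such a program, in Python: it enumerates every string over a fixed alphabet (so, under any fixed encoding, every possible program text, L-zombies included) and dovetails their execution. The step-bounded interpreter `run_for_steps` is a hypothetical stand-in supplied by the caller, and the names here are my own; the point is only that this short enumerator textually "contains" every program without being any of them.

```python
import itertools

ALPHABET = "01"  # any fixed alphabet / program encoding will do

def nth_program(i):
    """The i-th string over ALPHABET in length-then-lexicographic order.
    Every program text (and so, on the premise that minds are programs,
    every L-zombie) appears at some finite index."""
    length, index = 1, i
    while index >= len(ALPHABET) ** length:
        index -= len(ALPHABET) ** length
        length += 1
    digits = []
    for _ in range(length):
        digits.append(ALPHABET[index % len(ALPHABET)])
        index //= len(ALPHABET)
    return "".join(reversed(digits))

def dovetail(run_for_steps):
    """Interleave all programs: at stage k, advance each of the first k
    programs by k steps, so no non-halting program starves the rest.
    run_for_steps(src, k) is a hypothetical step-bounded interpreter."""
    for k in itertools.count(1):
        for i in range(k):
            run_for_steps(nth_program(i), k)
```

Whether or not `dovetail` is ever called, the source above sits there "encapsulating" all of those would-be computations, which is exactly the distinction the L-zombie concept is after.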
Keep in mind that people who apply serious, life-changing ideas after reading about them in fiction are the exception rather than the norm. Most people who aren't exceptionally intellect-oriented need to personally meet someone who "has something" they themselves wish they had, and then need some reason to think they can imitate that person in that respect. Fiction just isn't that, except perhaps in indirect ways. For people who don't identify strongly with their intellectual traits, rationalist communities competing in the real-world arena, with members living lives that others both want to and can emulate, are a far more effective angle.