I think part of the problem is not that the audience is less intelligent than you imagine, but that people sometimes use different techniques to learn the same thing. Your explanation may seem obvious to you yet confuse everyone else, while an alternative explanation that you yourself would have difficulty understanding might be obvious to others.
One example: some people learn better from concrete examples, while others learn better from abstract ones.
I have been trying to invent an AI for over a year, although I haven't made much progress lately. My current approach is somewhat similar to how our brain works according to "Society of Mind". That is, when finished, the system is supposed to consist of a collection of independent, autonomous units that can interact and create new units. The tricky part is, of course, the prioritization between the units: how do you evaluate how promising an approach is? I recently found out that something like this has already been tried, but that has happened to me several times by now, since I started thinking and writing about AI before I had read any books on the subject (I didn't have a decent library in school).
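To make the prioritization problem concrete, here is a minimal sketch of the kind of loop I have in mind, assuming units are scheduled by a single numeric priority; the `Unit` class, `run_society` function, and the example unit names are my own illustration, not anything from an existing project. The open question sits exactly where the comment in the loop is: how a newly spawned unit's priority should be evaluated.

```python
import heapq
import itertools


class Unit:
    """A toy autonomous unit: when run, it may emit new units."""

    def __init__(self, name, priority, spawn=None):
        self.name = name
        self.priority = priority      # lower value = runs sooner (hypothetical scheme)
        self.spawn = spawn or []      # units this one creates when it runs

    def run(self):
        print(f"running {self.name} (priority {self.priority})")
        return self.spawn             # new units to add to the society


def run_society(initial_units, max_steps=10):
    """Repeatedly run the highest-priority unit and add whatever it spawns."""
    counter = itertools.count()       # tie-breaker so the heap never compares Units
    queue = [(u.priority, next(counter), u) for u in initial_units]
    heapq.heapify(queue)
    for _ in range(max_steps):
        if not queue:
            break
        _, _, unit = heapq.heappop(queue)
        for new_unit in unit.run():
            # The hard part mentioned above: deciding new_unit.priority,
            # i.e. evaluating how promising each new approach actually is.
            heapq.heappush(queue, (new_unit.priority, next(counter), new_unit))


if __name__ == "__main__":
    child = Unit("refine-plan", priority=1)
    run_society([Unit("explore", priority=2, spawn=[child]),
                 Unit("observe", priority=3)])
```

In this toy version the priorities are just hand-assigned numbers; the real difficulty is replacing them with some learned or computed estimate of how promising a unit is.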
I have no great hopes that I will actually manage to create something useful with this, but even a tiny probability of a working AI is worth the effort (as long as it's friendly, at least).