Mathematician / Learner / Healer / Awed by bio-complexity
Currently researching study-group structures that resemble metabolism.
That paper is new to me, and yes, it is related and interesting. I like their use of a ‘glimpse’: more resolution in the centre, less resolution further away.
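To pin down what I mean by that kind of glimpse, here is a minimal numpy sketch of my own (not from the paper; the patch size and scale factors are illustrative, and it assumes the centre is far enough from the image edges):

```python
import numpy as np

def glimpse(image, cx, cy, patch=8, scales=(1, 2, 4)):
    """Extract concentric crops around (cx, cy), each average-pooled
    down to patch x patch, so the centre keeps full resolution and
    the periphery gets progressively coarser."""
    layers = []
    for s in scales:
        half = patch * s // 2
        crop = image[cy - half:cy + half, cx - half:cx + half]
        # crude average pooling: (patch*s, patch*s) -> (patch, patch)
        crop = crop.reshape(patch, s, patch, s).mean(axis=(1, 3))
        layers.append(crop)
    return np.stack(layers)  # shape: (len(scales), patch, patch)

# e.g. glimpse(np.random.rand(64, 64), 32, 32) gives three 8x8 views,
# covering 8x8, 16x16 and 32x32 pixel neighbourhoods of the centre.
```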
About ‘hard’ and ‘soft’: if they mean what I think they do, then yes, the attention is ‘hard’. It forces some weights to zero that in a fully connected network could end up non-zero. That might need some care in training, as a network whose attention is ‘way off’ from where it should be has no gradient to pull it toward better solutions.
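To make the hard/soft distinction concrete, here is a toy sketch of my own (the logits and values are made up; the point is where the gradient disappears):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, 0.5, -1.0])    # attention logits over 3 locations
values = np.array([10.0, 20.0, 30.0])  # features at those locations

# Soft attention: every location gets a non-zero weight, so every
# location receives some gradient signal through the softmax.
soft_out = softmax(scores) @ values

# Hard attention: a one-hot selection. Locations with weight zero
# contribute nothing to the output, and argmax itself is not
# differentiable, so a location the network never looks at gets no
# gradient nudging it back toward usefulness.
hard_weights = np.eye(len(scores))[scores.argmax()]
hard_out = hard_weights @ values
```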
Thanks for the link to the paper and for the idea of thinking about the extent to which the attention is or is not differentiable.
I’ll use ‘ethical’ here as shorthand for ‘serving the public interest’, even if that is not exactly what ethical normally means. One approach is to work on two related sub-problems:
1. How to make more companies more ethical.
2. How to make the more ethical companies have more influence on politicians.
It’s not obvious how to make headway with either sub-problem. Journalism about ethical companies might help. It would require something of a shift in journalism from the attention-grabbing kind to the exploring-explaining kind. Publicity about ethical companies could encourage more companies to be more ethical. That journalism could change in this way is not an entirely forlorn hope.
For ethical companies to have more influence politically, they need to be financially successful and generating significant numbers of jobs. Ethical and financially successful seem almost contradictory. Aligning those is a sub-goal that could be explored further in its own right. Good places to start would be education and health care: look at what forces pull them away from being either ethical or financially successful. Some work done in Africa by educational and health charities on very low budgets shows how big leaps in service quality can be made economically. Perhaps some innovations there can be translated back to first-world economies, and do good more profitably than the incumbents.
The political disconnect (or misconnect) is a huge problem. The situation has a tremendous amount of inertia to it, because making politics more ethical entails many changes throughout society. It’s not as simple as it seems: one can’t just make politics more ethical and then watch society change for the better. It’s almost the other way round.
Thanks. Paragraph added.
I think you’ll find any online exercises boring (too repetitive) or frustrating (too hard, with too few clues as to what you are not understanding). I think you need a person who knows the material and who is willing to trade skills with you.
I have a decent maths background, not stellar, and can have a go at an email exchange. In return, what I’d like ‘as trade’ is help from you with strategies for personal growth.
If you are interested in such a trade we can work out exactly how by email. I’d imagine we’d be jointly and incrementally creating small resources for those two areas of learning.
I can offer some linear algebra exercises around why a radar reflector works, and how eigenvectors matter in understanding the origins of life (see the sketch below). We can also look at specific 3Blue1Brown videos in that series and create new exercises directly around the content. 3Blue1Brown is brilliant. I love what he is doing.
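To give a taste of that eigenvector exercise, here is a rough sketch of my own, assuming it runs along the lines of Eigen’s quasispecies model, where the long-run mix of replicating sequences is the dominant eigenvector of a replication-mutation matrix (the matrix entries below are purely illustrative):

```python
import numpy as np

# Toy replication-mutation matrix W: W[i, j] is the rate at which
# sequence j produces sequence i (diagonal = faithful copying,
# off-diagonal = mutation).
W = np.array([[0.90, 0.05, 0.02],
              [0.08, 0.85, 0.08],
              [0.02, 0.10, 0.90]])

# Repeated replication (power iteration) drives the population toward
# the dominant eigenvector of W: the 'quasispecies' distribution.
x = np.ones(3) / 3
for _ in range(200):
    x = W @ x
    x = x / x.sum()

vals, vecs = np.linalg.eig(W)
v = np.abs(vecs[:, vals.real.argmax()].real)
print(x, v / v.sum())  # the two agree
```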
PM me if you are interested.
The image synthesis is impressive. The Transformer network paper looks intriguing. I will need to read it again much more slowly, without skimming, to understand it. Thanks for both the links and the feedback on aligning AI.
I agree the ideas really are about progressing AI, rather than progressing AI specifically in a positive way. As a post-hoc justification though, exploring attention mechanisms in machine learning indicates that what AI ‘cares about’ may be pretty deeply embedded in its technology. Your comment, and my need to justify post-hoc, set me to the task of making that link more concrete, so let me expand on that.
I think many animals have almost hard-wired attention mechanisms for alerting them to eyes. Things with eyes are animate, and may need a reaction more rapidly than rocks or trees do. Animals have similar almost hard-wired mechanisms for sudden movement too.
What alerting or attention setting mechanisms will AIs for self-driving cars have? Probably they will prioritise sudden movement detection. Probably they won’t have any specific mechanism for alerting to eyes. Perhaps that’s a mistake.
I’ve noticed that in some videos of ‘what a car sees’, the bounding boxes are pretty stable when following vehicles, but flick on and off around people on the sidewalk. The stable bounding boxes are relatively square; the unstable ones are tall and thin.
Now just maybe, we want a visual architecture that is very good at distinguishing tall thin objects that could be people from tall thin objects that could be lamp posts. That has implications all the way down the visual pipeline. The car is not going to be good at solving trolley problems if it can tell trucks from cars but can’t tell people from lamp posts.
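One cheap way to start down that road, sketched below (my own illustration; the box format, the threshold, and the idea of a dedicated second-stage check are all assumptions, not anyone’s actual pipeline):

```python
def flag_tall_thin(boxes, ratio_threshold=2.5):
    """Given boxes as (x, y, width, height) tuples, return those whose
    height/width ratio suggests a person or a lamp post -- candidates
    that deserve a dedicated second look rather than a flickering
    generic detection."""
    flagged = []
    for (x, y, w, h) in boxes:
        if w > 0 and h / w >= ratio_threshold:
            flagged.append((x, y, w, h))
    return flagged

# e.g. flag_tall_thin([(0, 0, 50, 40), (120, 10, 20, 90)])
# returns [(120, 10, 20, 90)], the tall thin detection.
```

A bare ratio test like this is obviously crude; the point is only that ‘tall and thin’ is cheap to compute, and could gate a dedicated person-vs-lamp-post classifier further down the pipeline.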