Hi! Equivariance is a very powerful tool and is widely applied in many branches of physics. Geometric Algebra Transformers generalize the concept of equivariance to transformers for essentially any group, e.g., $SO(3,1)$, $SO(3)$, etc.
In fact, you can achieve drastic simplification and performance boosts by using equivariant models. An impressive example is this paper: 19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics (arXiv:2310.16121). An important point is that equivariant networks often significantly push capabilities.
A more general discussion of equivariant transformers is here: Geometric Algebra Transformer (arXiv:2305.18415).
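To make the idea concrete, here is a minimal numerical sketch (not the GATr architecture itself, just an illustration of the defining property): a layer $f$ is equivariant under a group action $R$ when $f(Rx) = Rf(x)$. A toy layer that rescales each vector by a function of its rotation-invariant norm satisfies this for $SO(3)$, and the check can be done numerically with a random rotation:

```python
import numpy as np

def random_rotation(dim=3, seed=0):
    # QR decomposition of a random Gaussian matrix yields an orthogonal
    # matrix; flip a column if needed so det = +1, i.e. an element of SO(3).
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
    q *= np.sign(np.diag(r))  # fix the sign convention of the QR factors
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def equivariant_layer(x):
    # Scales each input vector by a nonlinear function of its norm.
    # Norms are invariant under rotation, so the layer is SO(3)-equivariant.
    norms = np.linalg.norm(x, axis=-1, keepdims=True)
    return x * np.tanh(norms)

x = np.random.default_rng(1).normal(size=(5, 3))  # 5 points in R^3
R = random_rotation()

out_rotate_first = equivariant_layer(x @ R.T)  # rotate, then apply layer
out_rotate_last = equivariant_layer(x) @ R.T   # apply layer, then rotate
assert np.allclose(out_rotate_first, out_rotate_last)  # f(Rx) = R f(x)
```

An equivariant *network* is built by composing only layers with this property, which is exactly the constraint that buys the parameter savings seen in the 19-parameter paper above.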