Are exotic computing paradigms (ECPs) pro-alignment?
cf https://twitter.com/niplav_site/status/1760277413907382685
ECPs are orthogonal to the “scale is all you need” camp, and the “scale is all you need” thesis is the hardest case for alignment/interpretability
some examples of alternatives: https://www.lesswrong.com/posts/PyChB935jjtmL5fbo/time-and-energy-costs-to-erase-a-bit, Normal Computing, https://www.lesswrong.com/posts/ngqFnDjCtWqQcSHXZ/safety-of-self-assembled-neuromorphic-hardware, and computing-related Thiel fellows (e.g. Thomas Sohmers, Tapa Ghosh)
[this is also how to get back into neglectedness, which EA adopted as a principle but has recently forgotten]
from Charles Rosenbauer:
This is neat, but this does little to nothing to optimize non-AI compute. Modern CPUs are insanely wasteful with transistors, plenty of room for multiple orders of magnitude of optimization there. This is only a fraction of the future of physics-optimized compute.
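A back-of-envelope sketch of the headroom Rosenbauer gestures at, comparing the Landauer limit (the thermodynamic minimum energy to erase one bit, k_B·T·ln 2) against a per-operation energy figure for current CPUs; the ~10 pJ/op number is an illustrative assumption, not a measurement, and real figures vary widely by chip and workload:

```python
import math

# Landauer limit: minimum energy to erase one bit, E = k_B * T * ln(2).
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, 2019 SI)
T = 300.0           # assumed operating temperature, K

landauer_j = K_B * T * math.log(2)  # ~2.87e-21 J per bit erased

# Illustrative assumption: a modern CPU spends on the order of 10 pJ
# per simple operation (rough figure, not from any cited source).
cpu_j_per_op = 10e-12

headroom = cpu_j_per_op / landauer_j
print(f"Landauer limit at {T:.0f} K: {landauer_j:.2e} J/bit")
print(f"Assumed CPU energy/op:     {cpu_j_per_op:.0e} J")
print(f"Headroom: ~{math.log10(headroom):.1f} orders of magnitude")
```

Under these assumptions the gap comes out to roughly nine to ten orders of magnitude, which is consistent with the “multiple orders of magnitude” claim even if only a fraction of it is practically reachable.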